[ "The paper presents an algorithm for performing min-max optimisation without gradients and analyses its convergence. The algorithm is evaluated for the min-max problems that arise in the context of adversarial attacks. The presented algorithm is a natural application of a zeroth-order gradient estimator and the authors also prove that the algorithm has a sublinear convergence rate (in a specific sense). ", "This paper considers zeroth-order method for min-max optimization (ZO-MIN-MAX) in two cases: one-sided black box (for outer minimization) and two-sided black box (for both inner maximization and outer minimization). Convergence analysis is carefully provided to show that ZO-MIN-MAX converges to a neighborhood of stationary points. Then, the authors empirically compare several methods on " ]
In this paper, we study the problem of constrained robust (min-max) optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values. We present a principled optimization framework, integrating a zeroth-order (ZO) gradient estimator with an alternating projected stochastic gradient descent-ascent method, where the former requires only a small number of function queries and the latter needs just a one-step descent/ascent update. We show that the proposed framework, referred to as ZO-Min-Max, has a sub-linear convergence rate under mild conditions and scales gracefully with problem size. On the application side, we explore a promising connection between black-box min-max optimization and black-box evasion and poisoning attacks in adversarial machine learning (ML). Our empirical evaluations on these use cases demonstrate the effectiveness of our approach and its scalability to dimensions that prohibit using recent black-box solvers.
[ { "authors": [ "Charu Aggarwal", "Djallel Bouneffouf", "Horst Samulowitz", "Beat Buesser", "Thanh Hoang", "Udayan Khurana", "Sijia Liu", "Tejaswini Pedapati", "Parikshit Ram", "Ambrish Rawat" ], "title": "How can ai automate end-to-end data science", "venue": null, "year": 1910 }, { "authors": [ "Abdullah Al-Dujaili", "Erik Hemberg", "Una-May O’Reilly" ], "title": "Approximating nash equilibria for black-box games: A bayesian optimization approach", "venue": "arXiv preprint arXiv:1804.10586,", "year": 2018 }, { "authors": [ "Abdullah Al-Dujaili", "Alex Huang", "Erik Hemberg", "Una-May O’Reilly" ], "title": "Adversarial deep learning for robust detection of binary encoded malware", "venue": "IEEE Security and Privacy Workshops (SPW),", "year": 2018 }, { "authors": [ "Abdullah Al-Dujaili", "Shashank Srikant", "Erik Hemberg", "Una-May O’Reilly" ], "title": "On the application of danskin’s theorem to derivative-free minimax optimization", "venue": "arXiv preprint arXiv:1805.06322,", "year": 2018 }, { "authors": [ "Charles Audet", "Warren Hare" ], "title": "Derivative-free and blackbox", "venue": null, "year": 2017 }, { "authors": [ "J. Bernstein", "Y.-X. Wang", "K. Azizzadenesheli", "A. Anandkumar" ], "title": "signsgd: compressed optimisation for non-convex problems", "venue": null, "year": 2018 }, { "authors": [ "Ilija Bogunovic", "Jonathan Scarlett", "Stefanie Jegelka", "Volkan Cevher" ], "title": "Adversarially robust optimization with gaussian processes", "venue": "In Proc. of Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jürgen Branke", "Johanna Rosenbusch" ], "title": "New approaches to coevolutionary worst-case optimization", "venue": "In International Conference on Parallel Problem Solving from Nature,", "year": 2008 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Tianyi Chen", "Georgios B Giannakis" ], "title": "Bandit convex optimization for scalable and dynamic IoT management", "venue": "IEEE Internet of Things Journal,", "year": 2018 }, { "authors": [ "X. Chen", "S. Liu", "R. Sun", "M. Hong" ], "title": "On the convergence of a class of adam-type algorithms for non-convex optimization", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "A.R. Conn", "K. Scheinberg", "L.N. Vicente" ], "title": "Introduction to derivative-free optimization, volume", "venue": null, "year": 2009 }, { "authors": [ "John M Danskin" ], "title": "The theory of max-min, with applications", "venue": "SIAM Journal on Applied Mathematics,", "year": 1966 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Amit Dhurandhar", "Tejaswini Pedapati", "Avinash Balakrishnan", "Pin-Yu Chen", "Karthikeyan Shanmugam", "Ruchir Puri" ], "title": "Model agnostic contrastive explanations for structured data", "venue": null, "year": 1906 }, { "authors": [ "J.C. Duchi", "M.I. 
Jordan", "M.J. Wainwright", "A. Wibisono" ], "title": "Optimal rates for zero-order convex optimization: The power of two function evaluations", "venue": "IEEE Transactions on Information Theory,", "year": 2015 }, { "authors": [ "Alireza Fallah", "Aryan Mokhtari", "Asuman Ozdaglar" ], "title": "On the convergence theory of gradient-based model-agnostic meta-learning algorithms", "venue": "arXiv preprint arXiv:1908.10400,", "year": 2019 }, { "authors": [ "Chris Finlay", "Adam M Oberman" ], "title": "Scaleable input gradient regularization for adversarial robustness", "venue": "arXiv preprint arXiv:1905.11468,", "year": 2019 }, { "authors": [ "Lampros Flokas", "Georgios Piliouras" ], "title": "Efficiently avoiding saddle points with zero order methods: No gradients required", "venue": null, "year": 1910 }, { "authors": [ "X. Gao", "B. Jiang", "S. Zhang" ], "title": "On the information-adaptive variants of the ADMM: an iteration complexity perspective", "venue": "Optimization Online,", "year": 2014 }, { "authors": [ "S. Ghadimi", "G. Lan" ], "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "S. Ghadimi", "G. Lan", "H. Zhang" ], "title": "Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization", "venue": "Mathematical Programming,", "year": 2016 }, { "authors": [ "G. Gidel", "T. Jebara", "S. Lacoste-Julien" ], "title": "Frank-Wolfe Algorithms for Saddle Point Problems", "venue": "In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "E.Y. Hamedani", "A. Jalilzadeh", "N.S. Aybat", "U.V. Shanbhag" ], "title": "Iteration complexity of randomized primal-dual methods for convex-concave saddle point problems", "venue": "arXiv preprint arXiv:1806.04118,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Jeffrey W Herrmann" ], "title": "A genetic algorithm for minimax optimization problems", "venue": "In CEC,", "year": 1999 }, { "authors": [ "A. Ilyas", "L. Engstrom", "A. Athalye", "J. 
Lin" ], "title": "Black-box adversarial attacks with limited queries and information", "venue": "arXiv preprint arXiv:1804.08598,", "year": 2018 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Aleksander Madry" ], "title": "Prior convictions: Black-box adversarial attacks with bandits and priors", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Matthew Jagielski", "Alina Oprea", "Battista Biggio", "Chang Liu", "Cristina Nita-Rotaru", "Bo Li" ], "title": "Manipulating machine learning: Poisoning attacks and countermeasures for regression learning", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2018 }, { "authors": [ "Mikkel T Jensen" ], "title": "A new look at solving minimax problems with coevolutionary genetic algorithms", "venue": "In Metaheuristics: computer decision-making,", "year": 2003 }, { "authors": [ "Chi Jin", "Praneeth Netrapalli", "Michael I Jordan" ], "title": "Minmax optimization: Stable limit points of gradient descent ascent are locally optimal", "venue": null, "year": 1902 }, { "authors": [ "Eric Jones", "Travis Oliphant", "Pearu Peterson" ], "title": "SciPy: Open source scientific tools for Python", "venue": null, "year": 2001 }, { "authors": [ "Jeffrey Larson", "Matt Menickelly", "Stefan M Wild" ], "title": "Derivative-free optimization methods", "venue": "Acta Numerica,", "year": 2019 }, { "authors": [ "J. Liu", "Weiming Zhang", "Nenghai Yu" ], "title": "Iterative ensemble adversarial attack", "venue": "Caad", "year": 2018 }, { "authors": [ "S. Liu", "J. Chen", "P.-Y. Chen", "A.O. Hero" ], "title": "Zeroth-order online admm: Convergence analysis and applications", "venue": "In Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "Sijia Liu", "Pin-Yu Chen", "Xiangyi Chen", "Mingyi Hong" ], "title": "signSGD via zeroth-order oracle", "venue": "In Proc. of International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Delving into transferable adversarial examples and black-box attacks", "venue": "arXiv preprint arXiv:1611.02770,", "year": 2016 }, { "authors": [ "S. Lu", "I. Tsaknakis", "M. Hong" ], "title": "Block alternating optimization for non-convex min-max problems: Algorithms and applications in signal processing and communications", "venue": "In Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2019 }, { "authors": [ "S. Lu", "I. Tsaknakis", "M. Hong", "Y. Chen" ], "title": "Hybrid block successive approximation for one-sided non-convex min-max problems: Algorithms and applications", "venue": null, "year": 1902 }, { "authors": [ "A. Madry", "A. Makelov", "L. Schmidt", "D. Tsipras", "A. Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Jonathan Uesato", "Pascal Frossard" ], "title": "Robustness via curvature regularization, and vice versa", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Y. 
Nesterov" ], "title": "Dual extrapolation and its applications to solving variational inequalities and related problems", "venue": "Mathematical Programming,", "year": 2007 }, { "authors": [ "Y. Nesterov", "V. Spokoiny" ], "title": "Random gradient-free minimization of convex functions", "venue": "Foundations of Computational Mathematics,", "year": 2015 }, { "authors": [ "M. Nouiehed", "M. Sanjabi", "J.D. Lee", "M. Razaviyayn" ], "title": "Solving a class of non-convex min-max games using iterative first order methods", "venue": null, "year": 1902 }, { "authors": [ "Victor Picheny", "Mickael Binois", "Abderrahmane Habbal" ], "title": "A bayesian optimization approach to find nash equilibria", "venue": "Journal of Global Optimization,", "year": 2019 }, { "authors": [ "Qi Qian", "Shenghuo Zhu", "Jiasheng Tang", "Rong Jin", "Baigui Sun", "Hao Li" ], "title": "Robust optimization over multiple domains", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "H. Rafique", "M. Liu", "Q. Lin", "T. Yang" ], "title": "Non-convex min-max optimization: Provable algorithms and applications in machine learning", "venue": null, "year": 1810 }, { "authors": [ "Luis Miguel Rios", "Nikolaos V Sahinidis" ], "title": "Derivative-free optimization: a review of algorithms and comparison of software implementations", "venue": "Journal of Global Optimization,", "year": 2013 }, { "authors": [ "M. Sanjabi", "J. Ba", "M. Razaviyayn", "J.D. Lee" ], "title": "On the convergence and robustness of training gans with regularized optimal transport", "venue": "In Proceedings of the 32Nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Maziar Sanjabi", "Jimmy Ba", "Meisam Razaviyayn", "Jason D Lee" ], "title": "On the convergence and robustness of training gans with regularized optimal transport", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tom Schmiedlechner", "Abdullah Al-Dujaili", "Erik Hemberg", "Una-May O’Reilly" ], "title": "Towards distributed coevolutionary gans", "venue": "arXiv preprint arXiv:1807.08194,", "year": 2018 }, { "authors": [ "O. Shamir" ], "title": "An optimal algorithm for bandit and zero-order convex optimization with two-point feedback", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Jacob Steinhardt", "Pang Wei W Koh", "Percy S Liang" ], "title": "Certified defenses for data poisoning attacks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Brandon Tran", "Jerry Li", "Aleksander Madry" ], "title": "Spectral signatures in backdoor attacks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "C.-C. Tu", "P. Ting", "P.-Y. Chen", "S. Liu", "H. Zhang", "J. Yi", "C.-J. Hsieh", "S.-M. 
Cheng" ], "title": "Autozoom: Autoencoder-based zeroth order optimization method for attacking black-box neural networks", "venue": "arXiv preprint arXiv:1805.11770,", "year": 2018 }, { "authors": [ "Abraham Wald" ], "title": "Statistical decision functions which minimize the maximum risk", "venue": "Annals of Mathematics,", "year": 1945 }, { "authors": [ "Bolun Wang", "Yuanshun Yao", "Shawn Shan", "Huiying Li", "Bimal Viswanath", "Haitao Zheng", "Ben Y Zhao" ], "title": "Neural cleanse: Identifying and mitigating backdoor attacks in neural networks", "venue": "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks,", "year": 2019 }, { "authors": [ "Jingkang Wang", "Tianyun Zhang", "Sijia Liu", "Pin-Yu Chen", "Jiacen Xu", "Makan Fardad", "Bo Li" ], "title": "Beyond adversarial training: Min-max optimization in adversarial attack and defense, 2019b", "venue": null, "year": 2019 }, { "authors": [ "Rachel Ward", "Xiaoxia Wu", "Leon Bottou" ], "title": "AdaGrad stepsizes: Sharp convergence over nonconvex landscapes", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Richard A Watson", "Jordan B Pollack" ], "title": "Coevolutionary dynamics in a minimal substrate", "venue": "In Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation,", "year": 2001 }, { "authors": [ "Bogunovic" ], "title": "In Figure A1, we compare the convergence performance and computation time of ZO-Min-Max with the BO based approach STABLEOPT proposed in Bogunovic et al. (2018). Here we choose the same initial point for both ZO-Min-Max and STABLEOPT. And we set the same number of function queries per iteration for ZO-Min-Max (with q = 1) and STABLEOPT", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In numerous real-world applications, one is faced with various forms of adversary that are not accounted for by standard optimization algorithms. For instance, when training a machine learning model on user-provided data, malicious users can carry out a data poisoning attack: providing false data with the aim of corrupting the learned model (Steinhardt et al., 2017; Tran et al., 2018; Jagielski et al., 2018). At inference time, malicious users can evade detection of multiple models in the form of adversarial example attacks (Goodfellow et al., 2014; Liu et al., 2016; 2018a). Min-max (robust) optimization is a natural framework to address adversarial (worst-case) robustness (Madry et al., 2017b; Al-Dujaili et al., 2018b). It converts a standard minimization problem into a composition of an inner maximization problem and an outer minimization problem.\nMin-max optimization problems have been studied for multiple decades (Wald, 1945), and the majority of the proposed methods assume access to first-order (FO) information, i.e. gradients, to find or approximate robust solutions (Nesterov, 2007; Gidel et al., 2017; Hamedani et al., 2018; Qian et al., 2019; Rafique et al., 2018; Sanjabi et al., 2018b; Lu et al., 2019; Nouiehed et al., 2019; Lu et al., 2019; Jin et al., 2019). In this paper, we focus on design and analysis of black-box (gradient-free) min-max optimization methods, where gradients are neither symbolically nor numerically available, or they are tedious to compute (Conn et al., 2009). Our study is particularly motivated by the design of data poisoning and evasion adversarial attacks from black-box machine learning (ML) or deep learning (DL) systems, whose internal configuration and operating mechanism are unknown to adversaries. The extension of min-max optimization from the FO domain to the gradient-free regime is challenging since the solver suffers from uncertainties in both black-box objective functions and optimization procedure and do not scale well to high-dimensional problems.\nWe develop a provable and unified black-box min-max stochastic optimization method by integrating a query-efficient randomized zeroth-order (ZO) gradient estimator with a computation-efficient alternating gradient descent-ascent framework, where the former requires a small number of function queries to build a gradient estimate, and the latter needs just one-step descent/ascent update. Recently, ZO optimization has attracted increasing attention in solving ML/DL problems. For example, ZO optimization serves as a powerful and practical tool for generation of black-box adversarial examples\nto evaluate the adversarial robustness of ML/DL models (Chen et al., 2017; Ilyas et al., 2018; Tu et al., 2018; Ilyas et al., 2019). ZO optimization can also help to solve automated ML problems, where the gradients with respect to ML pipeline configuration parameters are intractable (Aggarwal et al., 2019). Furthermore, ZO optimization provides computationally-efficient alternatives of high-order optimization methods for solving complex ML/DL tasks, e.g., robust training by leveraging input gradient or curvature regularization (Finlay & Oberman, 2019; Moosavi-Dezfooli et al., 2019), modelagnostic meta-learning (Fallah et al., 2019), network control and management (Chen & Giannakis, 2018), and data processing in high dimension (Liu et al., 2018b). 
Other recent applications include generating model-agnostic contrastive explanations (Dhurandhar et al., 2019) and escaping saddle points (Flokas et al., 2019). Current studies (Ghadimi & Lan, 2013; Nesterov & Spokoiny, 2015; Duchi et al., 2015; Ghadimi et al., 2016; Shamir, 2017; Liu et al., 2019) suggest that ZO methods typically match the iteration complexity of FO methods, up to a slowdown factor that is a small-degree polynomial of the problem dimensionality. To the best of our knowledge, it was an open question whether any convergence rate analysis could be established for black-box min-max optimization.

Contribution. We summarize our contributions as follows. (i) We first identify a class of black-box attack and robust learning problems which turn out to be min-max black-box optimization problems. (ii) We propose a scalable and principled framework (ZO-Min-Max) for solving constrained min-max saddle point problems under both one-sided and two-sided black-box objective functions. Here the one-sided setting refers to the scenario where only the outer minimization problem is black-box. (iii) We provide a novel convergence analysis characterizing the number of objective function evaluations required to attain a locally robust solution to black-box min-max problems with nonconvex outer minimization and strongly concave inner maximization. Our analysis handles stochasticity in both the objective function and the ZO gradient estimator, and shows that ZO-Min-Max yields an O(1/T + 1/b + d/q) convergence rate, where T is the number of iterations, b is the mini-batch size, q is the number of random direction vectors used in ZO gradient estimation, and d is the number of optimization variables. (iv) We demonstrate the effectiveness of our proposal on practical data poisoning and evasion attack generation problems. (Source code will be released.)" }, { "heading": "2 RELATED WORK", "text": "FO min-max optimization. Gradient-based methods have been applied with celebrated success to solve min-max problems such as robust learning (Qian et al., 2019), generative adversarial networks (GANs) (Sanjabi et al., 2018a), adversarial training (Al-Dujaili et al., 2018b; Madry et al., 2017a), and robust adversarial attack generation (Wang et al., 2019b). Some of these FO methods are motivated by theoretical justifications based on Danskin's theorem (Danskin, 1966), which implies that the negative of the gradient of the outer minimization problem at the inner maximizer is a descent direction (Madry et al., 2017a). Convergence analysis of other FO min-max methods has been studied under different problem settings, e.g., (Lu et al., 2019; Qian et al., 2019; Rafique et al., 2018; Sanjabi et al., 2018b; Nouiehed et al., 2019). It was shown in (Lu et al., 2019) that a deterministic FO min-max algorithm has an O(1/T) convergence rate. In (Qian et al., 2019; Rafique et al., 2018), stochastic FO min-max methods have also been proposed, which yield convergence rates on the order of O(1/\sqrt{T}) and O(1/T^{1/4}), respectively. However, these works were restricted to unconstrained optimization on the minimization side. In (Sanjabi et al., 2018b), nonconvex-concave min-max problems were studied, but the proposed analysis requires solving the maximization problem up to only a small error. In (Nouiehed et al., 2019), an O(1/T) convergence rate was proved for nonconvex-nonconcave min-max problems under Polyak-Łojasiewicz conditions. 
Different from the aforementioned FO settings, ZO min-max stochastic optimization suffers from randomness in both the stochastic sampling of the objective function and ZO gradient estimation; this randomness is coupled across the alternating gradient descent-ascent steps, which makes the convergence analysis more challenging.

Gradient-free min-max optimization. In the black-box setup, coevolutionary algorithms have been used extensively to solve min-max problems (Herrmann, 1999; Schmiedlechner et al., 2018). However, they may oscillate and never converge to a solution due to pathological behaviors such as focusing and relativism (Watson & Pollack, 2001). Fixes to these issues have been proposed and analyzed, e.g., asymmetric fitness (Jensen, 2003; Branke & Rosenbusch, 2008). In (Al-Dujaili et al., 2018c), the authors employed an evolution strategy as an unbiased approximation of the descent direction of the outer minimization problem and showed empirical gains over coevolutionary techniques, albeit without any theoretical guarantees. Min-max black-box problems can also be addressed by resorting to direct search as well as model-based descent and trust-region methods (Audet & Hare, 2017; Larson et al., 2019; Rios & Sahinidis, 2013). However, these methods lack convergence rate analysis and are difficult to scale to high-dimensional problems. For example, the off-the-shelf model-based solver COBYLA supports problems with at most 216 variables in the SciPy Python library (Jones et al., 2001), which is even smaller than the size of a single ImageNet image. The recent work (Bogunovic et al., 2018) proposed a robust Bayesian optimization (BO) algorithm and established a theoretical lower bound on the number of min-max objective evaluations required to find a near-optimal point. However, BO approaches are often tailored to low-dimensional problems, and their computational complexity prohibits scalable application. From a game-theoretic perspective, the min-max solution of some problems corresponds to a Nash equilibrium between the outer minimizer and the inner maximizer, and hence black-box Nash equilibrium solvers can be used (Picheny et al., 2019; Al-Dujaili et al., 2018a). This correspondence, however, does not hold in general. Our work contrasts with the above lines of work by designing and analyzing black-box min-max techniques that are both scalable and theoretically grounded." }, { "heading": "3 PROBLEM SETUP", "text": "In this section, we define the black-box min-max problem and briefly motivate its applications. By min-max, we mean that the problem is a composition of inner maximization and outer minimization of the objective function f. By black-box, we mean that the objective function f is only accessible via point-wise function evaluations. Mathematically, we have

$$\min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}} \; f(x, y), \quad (1)$$

where x and y are optimization variables, f is a differentiable objective function, and X ⊂ R^{d_x} and Y ⊂ R^{d_y} are compact convex sets. For ease of notation, let d_x = d_y = d. In (1), the objective function f could represent either a deterministic loss or a stochastic loss f(x, y) = E_{ξ∼p}[f(x, y; ξ)], where ξ is a random variable following the distribution p. In this paper, we consider the stochastic variant in (1).

We focus on two black-box scenarios in which gradients (or stochastic gradients under randomly sampled ξ) of f w.r.t. x or y are not accessible.

(a) One-sided black-box: f(x, y) is a white box w.r.t. y but a black box w.r.t. x.

(b) Two-sided black-box: f(x, y) is a black box w.r.t. both x and y.
Motivation of setups (a) and (b). Both setups are well motivated by the design of black-box adversarial attacks. The one-sided black-box min-max formulation corresponds to a particular type of attack, known as a black-box ensemble evasion attack, where the attacker generates adversarial examples (i.e., crafted examples with slight perturbations that cause misclassification at the testing phase) and optimizes their worst-case performance against an ensemble of black-box classifiers and/or example classes. The two-sided black-box min-max formulation represents another type of attack at the training phase, known as a black-box poisoning attack, where the attacker deliberately influences the training data (by injecting poisoned samples) to manipulate the results of a black-box predictive model.

Although problems of designing ensemble evasion attacks (Liu et al., 2016; 2018a; Wang et al., 2019b) and data poisoning attacks (Jagielski et al., 2018; Wang et al., 2019a) have been studied in the literature, most prior work assumes that the adversary has full knowledge of the target ML model, leading to an impractical white-box attack setting. By contrast, we provide a solution to min-max attack generation under black-box ML models. We refer readers to Section 6 for further discussion and demonstration of our framework on these problems." }, { "heading": "4 ZO-MIN-MAX: A FRAMEWORK FOR BLACK-BOX MIN-MAX OPTIMIZATION", "text": "Our interest is in a scalable and theoretically principled framework for black-box min-max problems of the form (1). To this end, we first introduce a randomized gradient estimator that requires only a small number of point-wise function evaluations. Based on it, we then propose a ZO alternating projected gradient method to solve (1) under both the one-sided and two-sided black-box setups.

Randomized gradient estimator. In the ZO setting, we adopt a randomized gradient estimator to estimate the gradient of a function of the generic form h(x) := E_ξ[h(x; ξ)] (Liu et al., 2019; Gao et al., 2014),

$$\hat{\nabla}_x h(x) = \frac{1}{bq} \sum_{j \in \mathcal{I}} \sum_{i=1}^{q} \frac{d \, [h(x + \mu u_i; \xi_j) - h(x; \xi_j)]}{\mu} u_i, \quad (2)$$

where d is the number of variables, I denotes the mini-batch set of b i.i.d. stochastic samples {ξ_j}_{j=1}^b, {u_i}_{i=1}^q are q i.i.d. random direction vectors drawn uniformly from the unit sphere, and µ > 0 is a smoothing parameter. We note that the ZO gradient estimator (2) involves randomness from both the stochastic sampling w.r.t. ξ_j and the random direction sampling w.r.t. u_i. It is known from (Gao et al., 2014, Lemma 2) that ∇̂_x h(x) provides an unbiased estimate of the gradient of the smoothing function of h rather than the true gradient of h. Here the smoothing function of h is defined by h_µ(x) = E_v[h(x + µv)], where v follows the uniform distribution over the unit Euclidean ball. Besides the bias, we provide an upper bound on the variance of (2) in Lemma 1.

Lemma 1. Suppose that for all ξ, h(x; ξ) has an L_h-Lipschitz continuous gradient and the gradient of h(x; ξ) is upper bounded as ‖∇_x h(x; ξ)‖_2^2 ≤ η^2 at x ∈ R^d. Then E[∇̂_x h(x)] = ∇_x h_µ(x), and

$$\mathbb{E}\left[\|\hat{\nabla}_x h(x) - \nabla_x h_\mu(x)\|_2^2\right] \le \frac{2\eta^2}{b} + \frac{4d\eta^2 + \mu^2 L_h^2 d^2}{q} =: \sigma^2(L_h, \mu, b, q, d), \quad (3)$$

where the expectation is taken over all randomness.

Proof: See Appendix A.2. In Lemma 1, if we choose µ ≤ 1/\sqrt{d}, then the variance bound is of order O(1/b + d/q). 
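To make the estimator (2) concrete, the following is a minimal Python sketch on a toy objective; the quadratic toy function `h`, the batch of samples, and all parameter values are illustrative assumptions rather than the paper's experimental setup.

```python
import numpy as np

def zo_grad(h, x, xi_batch, q=10, mu=1e-3, rng=np.random.default_rng(0)):
    """Randomized ZO gradient estimator of Eq. (2).

    h(x, xi): scalar black-box function evaluated at point x under sample xi.
    xi_batch: mini-batch of b i.i.d. samples.
    q: number of random directions; mu: smoothing parameter.
    """
    d, b = x.size, len(xi_batch)
    g = np.zeros(d)
    for xi in xi_batch:
        h0 = h(x, xi)                        # one query reused across all q directions
        for _ in range(q):
            u = rng.standard_normal(d)
            u /= np.linalg.norm(u)           # uniform direction on the unit sphere
            g += (d * (h(x + mu * u, xi) - h0) / mu) * u
    return g / (b * q)

# Toy check on a quadratic: h(x; xi) = 0.5 * ||x - xi||^2, whose averaged
# gradient at x equals x - mean(xi).
h = lambda x, xi: 0.5 * np.sum((x - xi) ** 2)
x = np.ones(5)
xi_batch = [np.zeros(5) for _ in range(8)]
print(zo_grad(h, x, xi_batch))               # approximately equal to x
```

Note that each mini-batch costs b(q + 1) function queries, since the base value h(x; ξ_j) is shared across the q directions.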
In our\nproblem setting (1), the ZO gradients ∇̂xf(x,y) and ∇̂yf(x,y) follow the generic form of (2) by fixing y and letting h(·) := f(·,y) or by fixing x and letting h(·) := f(x, ·), respectively.\nAlgorithmic framework. To solve problem (1), we alternatingly perform ZO projected gradient descent/ascent method for updating x and y. Specifically, for one-sided ZO min-max optimization, the ZO projected gradient descent (ZO-PGD) over x yields\nx(t) = projX ( x(t−1) − α∇̂xf ( x(t−1),y(t−1) )) , (4)\nwhere t is the iteration index, ∇̂xf denotes the ZO gradient estimate of f w.r.t. x, α > 0 is the learning rate at the x-minimization step, and projX (a) signifies the projection of a onto X , given by the solution to the problem minx∈X ‖x− a‖22. For two-sided ZO min-max optimization, in addition to (4), our update on y obeys the ZO projected gradient ascent (ZO-PGA)\ny(t) = projY ( y(t−1) + β∇̂yf ( x(t),y(t−1) )) , (5)\nwhere β > 0 is the learning rate at the y-maximization step. The proposed method is named as ZO-Min-Max; see Algorithm 1.\nWhy estimates gradient rather than distribution of function values? Besides ZO optimization using random gradient estimates, the black-box min-max problem (1) can also be solved using the Bayesian optimization (BO) approach, e.g., (Bogunovic et al., 2018; Al-Dujaili et al., 2018a). The core idea of BO is to approximate the objective function as a Gaussian process (GP) learnt from the history of function values at queried points. Based on GP, the solution to problem (1) is then updated by maximizing a certain reward function, known as acquisition function. The advantage of BO is\nits mild requirements on the setting of black-box problems, e.g., at the absence of differentiability. However, BO usually does not scale beyond low-dimensional problems since learning the accurate GP model and solving the acquisition problem takes intensive computation cost per iteration. By contrast, our proposed method is more efficient, and mimics the first-order method by just using the random gradient estimate (2) as the descent/ascent direction. In Figure A1, we compare ZO-Min-Max with the BO based STABLEOPT algorithm proposed by (Bogunovic et al., 2018) through a toy example shown in (Bogunovic et al., 2018, Sec. 5). As we can see, ZO-Min-Max not only achieves more accurate solution but also requires less computation time. We refer readers to Appendix B for details.\nAlgorithm 1 ZO-Min-Max to solve problem (1)\n1: Input: given x(0) and y(0), learning rates α and β, the number of random directions q, and the possible mini-batch size b for stochastic optimization 2: for t = 1, 2, . . . , T do 3: x-step: perform ZO-PGD (4) 4: y-step: 5: if f(x(t),y) is black box w.r.t. y then 6: perform ZO-PGA (5) 7: else 8: perform PGA using∇yf(x(t),y(t−1)) as ascent direction in (5) 9: end if\n10: end for\nTechnical challenges in convergence analysis. The convergence analysis of ZO-Min-Max is more challenging than the case of FO min-max algorithms. Besides the inaccurate estimate of the gradient, the stochasticity of the estimator makes the convergence analysis sufficiently different from the FO deterministic case (Lu et al., 2019; Qian et al., 2019), since the errors in minimization and maximization are coupled as the algorithm proceeds.\nMoreover, the conventioanl analysis of ZO optimization for single-objective problems cannot directly be applied to ZO-Min-Max. 
Technical challenges in convergence analysis. The convergence analysis of ZO-Min-Max is more challenging than that of FO min-max algorithms. Besides the inaccuracy of the gradient estimate, the stochasticity of the estimator makes the analysis substantially different from the FO deterministic case (Lu et al., 2019; Qian et al., 2019), since the errors in minimization and maximization are coupled as the algorithm proceeds.

Moreover, the conventional analysis of ZO optimization for single-objective problems cannot be applied directly to ZO-Min-Max. Even in the one-sided black-box setting, ZO-Min-Max conducts alternating optimization using one-step ZO-PGD and PGA with respect to x and y, respectively. This is different from the reduced ZO optimization problem over x alone, min_{x∈X} h(x) := max_{y∈Y} f(x, y), which requires the algorithm to obtain the solution to max_{y∈Y} f(x, y) at a given x (when querying h(x) for a ZO gradient estimate); this process is usually non-trivial or computationally intensive.

In particular, one key difficulty stems from the alternating algorithmic structure (namely, the primal-dual framework): since the problem is in min-max form, the updates proceed in opposite optimization directions (minimization vs. maximization) over the variables x and y. Even when applying ZO optimization only to one side, one needs to quantify the effect of ZO gradient estimation on the descent over both x and y. We provide a detailed convergence analysis of ZO-Min-Max in the next section." }, { "heading": "5 CONVERGENCE ANALYSIS", "text": "We begin by elaborating on the assumptions and notation used in analyzing the convergence of ZO-Min-Max (Algorithm 1).

A1: In (1), f(x, y) is continuously differentiable and strongly concave w.r.t. y with parameter γ > 0, namely, given x ∈ X, f(x, y_1) ≤ f(x, y_2) + ∇_y f(x, y_2)^T (y_1 − y_2) − (γ/2)‖y_1 − y_2‖^2 for all points y_1, y_2 ∈ Y. Moreover, f is lower bounded by a finite number f^* and has bounded gradients, ‖∇_x f(x, y; ξ)‖^2 ≤ η^2 and ‖∇_y f(x, y; ξ)‖^2 ≤ η^2, for stochastic optimization with ξ ∼ p. Here ‖·‖ denotes the ℓ_2 norm. The constraint sets X, Y are convex and bounded with diameter R.

A2: f(x, y) has Lipschitz continuous gradients, i.e., there exist L_x, L_y > 0 such that ‖∇_x f(x_1, y) − ∇_x f(x_2, y)‖ ≤ L_x ‖x_1 − x_2‖ for all x_1, x_2 ∈ X, and ‖∇_y f(x_1, y) − ∇_y f(x_2, y)‖ ≤ L_y ‖x_1 − x_2‖ and ‖∇_y f(x, y_1) − ∇_y f(x, y_2)‖ ≤ L_y ‖y_1 − y_2‖ for all y_1, y_2 ∈ Y.

We note that A1 and A2 are required for analyzing the convergence of ZO-Min-Max. They are used even in the analysis of first-order min-max optimization methods (Lu et al., 2019; Nouiehed et al., 2019) and of first-order methods for nonconvex optimization with a single objective function (Chen et al., 2019; Ward et al., 2019). In A1, the strong concavity of f(x, y) with respect to y holds for applications such as robust learning over multiple domains (Qian et al., 2019) and the adversarial attack generation introduced in Section 6. In A2, the smoothness assumption (namely, Lipschitz continuous gradients) is required to quantify the descent of the alternating projected stochastic gradient descent-ascent method. Even for single-objective non-convex optimization, e.g., (Chen et al., 2019; Bernstein et al., 2018), A2 is needed in the analysis. For clarity, we summarize the problem and algorithmic parameters used in our convergence analysis in Table A1 of the Appendix.

We measure the convergence of ZO-Min-Max by the proximal gradient (Lu et al., 2019; Ghadimi et al., 2016),

$$\mathcal{G}(x, y) = \begin{bmatrix} (1/\alpha)\left(x - \mathrm{proj}_{\mathcal{X}}(x - \alpha \nabla_x f(x, y))\right) \\ (1/\beta)\left(y - \mathrm{proj}_{\mathcal{Y}}(y + \beta \nabla_y f(x, y))\right) \end{bmatrix}, \quad (6)$$

where (x, y) is a first-order stationary point of (1) iff ‖G(x, y)‖ = 0. In what follows, we delve into our convergence analysis.
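Under the box-constraint assumption used in the sketches above, the stationarity measure (6) can be monitored numerically as follows; substituting the ZO estimates ∇̂_x f, ∇̂_y f for the true gradients gives the quantity tracked in the stationary-gap plots of Section 6 (an illustrative sketch, not the paper's exact evaluation code).

```python
import numpy as np

def stationarity_gap(gx, gy, x, y, alpha, beta, rx=1.0, ry=1.0):
    """Squared proximal-gradient norm ||G(x, y)||^2 of Eq. (6) for box sets."""
    gx_term = (x - np.clip(x - alpha * gx, -rx, rx)) / alpha
    gy_term = (y - np.clip(y + beta * gy, -ry, ry)) / beta
    return float(np.sum(gx_term ** 2) + np.sum(gy_term ** 2))
```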
First, Lemma 2 shows the descent property of ZO-PGD at the x-minimization step of Algorithm 1.

Lemma 2. (Descent lemma in minimization) Under A1-A2, let (x^{(t)}, y^{(t)}) be a sequence generated by Algorithm 1. When f(x, y) is black-box w.r.t. x, we have the following descent property w.r.t. x:

$$\mathbb{E}[f(x^{(t+1)}, y^{(t)})] \le \mathbb{E}[f(x^{(t)}, y^{(t)})] - \left(\frac{1}{\alpha} - \frac{L_x}{2}\right) \mathbb{E}\|\Delta_x^{(t+1)}\|^2 + \alpha \sigma_x^2 + L_x \mu^2, \quad (7)$$

where Δ_x^{(t)} := x^{(t)} − x^{(t−1)} and σ_x^2 := σ^2(L_x, µ, b, q, d) as defined in (3).

Proof: See Appendix A.3.1.

It is clear from Lemma 2 that updating x reduces the objective value when the learning rate α is chosen small enough. However, ZO gradient estimation brings in additional errors through the terms ασ_x^2 and L_x µ^2, where the former is induced by the variance of the gradient estimate in (3) and the latter originates from bounding the distance between f and its smoothed version; see (25) in Appendix A.3.

Convergence rate of ZO-Min-Max by performing PGA. We next investigate the convergence of ZO-Min-Max when FO PGA is used at the y-maximization step (Line 8 of Algorithm 1) for solving one-sided black-box optimization problems.

Lemma 3. (Descent lemma in maximization) Under A1-A2, let (x^{(t)}, y^{(t)}) be a sequence generated by Algorithm 1 and define the potential function

$$\mathcal{P}(x^{(t)}, y^{(t)}, \Delta_y^{(t)}) = \mathbb{E}[f(x^{(t)}, y^{(t)})] + \frac{4 + 4\beta^2 L_y^2 - 7\beta\gamma}{2\beta^2\gamma} \mathbb{E}\|\Delta_y^{(t)}\|^2, \quad (8)$$

where Δ_y^{(t)} := y^{(t)} − y^{(t−1)}. When f(x, y) is black-box w.r.t. x and white-box w.r.t. y, we have the following descent property w.r.t. y:

$$\mathcal{P}(x^{(t+1)}, y^{(t+1)}, \Delta_y^{(t+1)}) \le \mathcal{P}(x^{(t+1)}, y^{(t)}, \Delta_y^{(t)}) - \left(\frac{1}{2\beta} - \frac{2L_y^2}{\gamma}\right) \mathbb{E}\|\Delta_y^{(t+1)}\|^2 + \left(\frac{2}{\gamma^2\beta} + \frac{\beta}{2}\right) L_x^2 \, \mathbb{E}\|\Delta_x^{(t+1)}\|^2. \quad (9)$$

Proof: See Appendix A.3.2.

It follows from (9) that when β is small enough, the term (1/(2β) − 2L_y^2/γ) E‖Δ_y^{(t+1)}‖^2 yields descent of the potential function after performing PGA, while the last term in (9) yields ascent. However, the latter quantity is compensated by the descent of the objective function at the minimization step shown in Lemma 2. Combining Lemma 2 and Lemma 3, we obtain the convergence rate of ZO-Min-Max in Theorem 1.

Theorem 1. Suppose that A1-A2 hold and that the sequence (x^{(t)}, y^{(t)}) over T iterations is generated by Algorithm 1 with learning rates satisfying β < 1/(4L_y^2) and α ≤ min{1/L_x, 1/(L_x/2 + 2L_x^2/(γ^2β) + βL_x^2/2)}. When f(x, y) is black-box w.r.t. x and white-box w.r.t. y, the convergence rate of ZO-Min-Max under a uniformly and randomly picked (x^{(r)}, y^{(r)}) from {(x^{(t)}, y^{(t)})}_{t=1}^T is given by

$$\mathbb{E}\|\mathcal{G}(x^{(r)}, y^{(r)})\|^2 \le \frac{c}{\zeta} \cdot \frac{\mathcal{P}_1 - f^* - \nu R^2}{T} + \frac{c\alpha\sigma_x^2}{\zeta} + \frac{c L_x \mu^2}{\zeta}, \quad (10)$$

where ζ is a constant independent of the parameters µ, b, q, d and T; P_t := P(x^{(t)}, y^{(t)}, Δ_y^{(t)}) is given by (8); c = max{L_x + 3/α, 3/β}; ν = min{4 + 4β^2 L_y^2 − 7βγ, 0}/(2β^2γ); σ_x^2 is the variance bound of the ZO gradient estimate given in (7); and f^*, R, γ, L_x and L_y are defined in A1-A2.

Proof: See Appendix A.3.3.

To better interpret Theorem 1, we begin by clarifying the parameters involved in the convergence rate (10). First, the parameter ζ appears in the denominator of the derived convergence error; however, ζ has a non-trivial lower bound given appropriate learning rates α and β (see Remark 1 below). Second, the parameter c is inversely proportional to α and β. Thus, to guarantee a constant effect of the ratio c/ζ, it is better not to set these learning rates too small; see the specification in Remark 2. Third, the parameter ν is non-positive, so the term −νR^2 is a bounded non-negative constant that does not affect the order of the convergence rate. Fourth, P_1 is the initial value of the potential function (8). By setting an appropriate learning rate β (e.g., following Remark 2), P_1 is upper bounded by a constant determined by the initial value of the objective function, the distance between the first two updates, the Lipschitz constant L_y and the strong-concavity parameter γ. 
We next provide Remarks 1-3 on Theorem 1.

Remark 1. Recall that ζ = min{c_1, c_2} (Appendix A.3.3), where c_1 = 1/(2β) − 2L_y^2/γ and c_2 = 1/α − (L_x/2 + 2L_x^2/(γ^2β) + βL_x^2/2). Given that L_x and L_y are Lipschitz constants and γ is the strong-concavity constant, a proper lower bound on ζ thus relies on the choice of the learning rates α and β. By setting β ≤ γ/(8L_y^2) and α ≤ 1/(L_x + 4L_x^2/(γ^2β) + βL_x^2), it is easy to verify that c_1 ≥ 2L_y^2/γ and c_2 ≥ L_x/2 + 2L_x^2/(γ^2β) + βL_x^2/2 ≥ L_x/2 + 2L_x^2/γ. Thus, we obtain ζ ≥ min{2L_y^2/γ, 2L_x^2/γ + L_x/2}. This justifies that ζ has a non-trivial lower bound, which keeps the convergence error bound (10) from becoming vacuous (although the bound has not been optimized over α and β).

Remark 2. It is not wise to set the learning rates α and β to extremely small values, since c is inversely proportional to α and β. Thus, we typically choose β = γ/(8L_y^2) and α = 1/(L_x + 4L_x^2/(γ^2β) + βL_x^2), as in Remark 1, to guarantee a constant effect of c/ζ.

Remark 3. By setting µ ≤ min{1/\sqrt{d}, 1/\sqrt{T}}, we obtain σ_x^2 = O(1/b + d/q) from Lemma 1, and Theorem 1 implies that ZO-Min-Max yields an O(1/T + 1/b + d/q) convergence rate for one-sided black-box optimization. Compared to the FO rate O(1/T) (Lu et al., 2019; Sanjabi et al., 2018a), ZO-Min-Max converges only to a neighborhood of a stationary point at an O(1/T) rate, where the size of the neighborhood is determined by the mini-batch size b and the number of random direction vectors q used in ZO gradient estimation. It is also worth mentioning that such a stationary gap may exist even for FO/ZO projected stochastic gradient descent applied to single-objective minimization problems (Ghadimi et al., 2016).

As shown in Remark 3, ZO-Min-Max can exhibit a stationary gap. A large mini-batch size b or number of random direction vectors q improves the iteration complexity; however, by (2) this requires O(bq) times more function queries per iteration, reflecting the tradeoff between iteration complexity and function query complexity in ZO optimization.
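As a small worked illustration of Remark 2, the helper below computes the suggested learning rates from the problem constants; the numeric values of L_x, L_y, and γ are placeholders, since in a truly black-box setting these constants would themselves have to be estimated.

```python
def remark2_learning_rates(Lx, Ly, gamma):
    """Learning rates suggested in Remarks 1-2 of Theorem 1."""
    beta = gamma / (8.0 * Ly ** 2)
    alpha = 1.0 / (Lx + 4.0 * Lx ** 2 / (gamma ** 2 * beta) + beta * Lx ** 2)
    return alpha, beta

# Placeholder constants for illustration only.
alpha, beta = remark2_learning_rates(Lx=10.0, Ly=10.0, gamma=1.0)
print(alpha, beta)  # tiny alpha; beta = 1/800
```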
Convergence rate of ZO-Min-Max by performing ZO-PGA. We now turn to the convergence analysis of ZO-Min-Max when ZO-PGA is used at the y-maximization step (Line 6 of Algorithm 1) for solving two-sided black-box optimization problems.

Lemma 4. (Descent lemma in maximization) Under A1-A2, let (x^{(t)}, y^{(t)}) be a sequence generated by Algorithm 1 and define the potential function

$$\mathcal{P}'(x^{(t)}, y^{(t)}, \Delta_y^{(t)}) = \mathbb{E}[f(x^{(t)}, y^{(t)})] + \frac{4 + 4(3L_y^2 + 2)\beta^2 - 7\beta\gamma}{\beta^2\gamma} \mathbb{E}\|\Delta_y^{(t)}\|^2. \quad (11)$$

When f(x, y) is black-box w.r.t. both x and y, we have the following descent property w.r.t. y:

$$\mathcal{P}'(x^{(t+1)}, y^{(t+1)}, \Delta_y^{(t+1)}) \le \mathcal{P}'(x^{(t+1)}, y^{(t)}, \Delta_y^{(t)}) - \left(\frac{1}{2\beta} - \frac{6L_y^2 + 4}{\gamma}\right) \mathbb{E}\|\Delta_y^{(t+1)}\|^2 + \left(\frac{6L_x^2}{\gamma^2\beta} + \frac{3\beta L_x^2}{2}\right) \mathbb{E}\|\Delta_x^{(t+1)}\|^2 + \frac{7\beta^2\gamma^2 + 28\beta\gamma + 12}{\beta\gamma^2} \sigma_y^2 + \frac{\beta\gamma + 4}{4\beta^2\gamma} \mu^2 d^2 L_y^2, \quad (12)$$

where σ_y^2 := σ^2(L_y, µ, b, q, d) as given in (3).

Proof: See Appendix A.4.1.

Lemma 4 is analogous to Lemma 3, additionally taking into account the effect of the ZO gradient estimate ∇̂_y f(x, y) on the potential function (11). This effect is characterized by the terms involving σ_y^2 and µ^2 d^2 L_y^2 in (12).

Theorem 2. Suppose that A1-A2 hold and that the sequence (x^{(t)}, y^{(t)}) over T iterations is generated by Algorithm 1 with learning rates satisfying β < γ/(4(3L_y^2 + 2)) and α ≤ min{1/L_x, 1/(L_x/2 + 6L_x^2/(γ^2β) + 3βL_x^2/2)}. When f(x, y) is black-box w.r.t. both x and y, the convergence rate of ZO-Min-Max under a uniformly and randomly picked (x^{(r)}, y^{(r)}) from {(x^{(t)}, y^{(t)})}_{t=1}^T is given by

$$\mathbb{E}\|\mathcal{G}(x^{(r)}, y^{(r)})\|^2 \le \frac{c}{\zeta'} \cdot \frac{\mathcal{P}'_1 - f^* - \nu' R^2}{T} + \frac{c\alpha}{\zeta'}\sigma_x^2 + \left(\frac{c b_1}{\zeta'} + d^2 L_y^2\right)\mu^2 + \left(\frac{c b_2}{\zeta'} + 2\right)\sigma_y^2,$$

where ζ' is a constant independent of the parameters µ, b, q, d and T; P'_t := P'(x^{(t)}, y^{(t)}, Δ_y^{(t)}) as in (11); c is defined in (10); ν' = min{4 + 4(3L_y^2 + 2)β^2 − 7βγ, 0}/(β^2γ); b_1 = L_x + d^2 L_y^2 (4 + βγ)/(4β^2γ); b_2 = (7β^2γ^2 + 28βγ + 12)/(βγ^2); σ_x^2 and σ_y^2 were introduced in (7) and (12); and f^*, R, γ, L_x and L_y are defined in A1-A2.

Proof: See Appendix A.4.2.

Following an argument similar to Remark 1 on Theorem 1, one can choose proper learning rates α and β to obtain a valid lower bound on ζ'. However, different from Theorem 1, the convergence error in Theorem 2 involves an additional error term related to σ_y^2 and has a worse dimension dependence in the term related to µ^2. The latter yields a more restrictive choice of the smoothing parameter µ: we obtain an O(1/T + 1/b + d/q) convergence rate when µ ≤ 1/(d\sqrt{T})." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we evaluate the empirical performance of ZO-Min-Max on two applications of adversarial exploration: 1) design of a black-box ensemble attack against the neural networks Inception-V3 (Szegedy et al., 2016) and ResNet-50 (He et al., 2016) on ImageNet (Deng et al., 2009), and 2) design of a black-box poisoning attack against a logistic regression model.

Black-box ensemble evasion attack via universal perturbation. We consider the scenario in which the attacker generates adversarial examples against an ensemble of multiple classifiers and/or image classes (Liu et al., 2016; 2018a). More formally, let (z, l) denote a legitimate image z with true class label l, and let z' := z + x denote an adversarial example, where x signifies the adversarial perturbation. Here the natural image z and the perturbed image z + x are normalized to [−0.5, 0.5]^d. Considering I classes of images (the group of images corresponding to class l_i is denoted by Ω_i) and J network models, the adversary seeks a universal perturbation x across the I image classes and J models. The proposed attack problem is given by

$$\min_{x \in \mathcal{X}} \; \max_{w \in \mathcal{W}} \; f_1(x, w) := \sum_{j=1}^{J} \sum_{i=1}^{I} \left[w_{ij} F_{ij}(x; \Omega_i, l_i)\right] - \lambda \left\|w - \mathbf{1}/(IJ)\right\|_2^2, \quad (13)$$

where x and w ∈ R^{IJ} are optimization variables, and w_{ij} denotes the (i, j)-th entry of w, corresponding to the importance weight of attacking image class i under neural network model j. In problem (13), X denotes the perturbation constraint, e.g., X = {x | ‖x‖_∞ ≤ ε, z + x ∈ [−0.5, 0.5]^d, ∀z ∈ ∪_i Ω_i}; W = {w | 1^T w = 1, w ≥ 0}; F_{ij}(x; Ω_i, l_i) is the attack loss for attacking the set of images of class l_i under model j; and λ > 0 is a regularization parameter. We note that {F_{ij}} in (13) are black-box functions w.r.t. x, since the network models are blind to the adversary, who cannot perform back-propagation to obtain gradients. By contrast, f_1 is a white-box and strongly concave function w.r.t. w once the values of {F_{ij}} are given. Thus, problem (13) is a one-sided black-box optimization problem.
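To make (13) concrete, here is a minimal sketch of the one-sided objective and of the simplex projection needed for the w-update; `attack_loss` stands in for the black-box per-(model, class) losses F_ij queried from the target models and is an assumption of this sketch.

```python
import numpy as np

def f1(x, w, attack_loss, lam=5.0):
    """Ensemble attack objective of Eq. (13).

    attack_loss: black-box oracle, attack_loss(x)[i, j] = F_ij(x), shape (I, J).
    w: importance weights on the probability simplex, flattened to length I*J.
    """
    F = attack_loss(x).ravel()
    return float(w @ F - lam * np.sum((w - 1.0 / w.size) ** 2))

def proj_simplex(v):
    """Euclidean projection onto {w : sum(w) = 1, w >= 0} (sort-based method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)
```

Since f_1 is white-box and strongly concave in w for λ > 0, the w-step of Algorithm 1 can use the exact gradient F − 2λ(w − 1/(IJ)) followed by proj_simplex as proj_W.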
We also\nspecify the attack loss Fij in (13) as the C&W untargeted attack loss (Carlini & Wagner, 2017), Fij (x; Ωi, li) = (1/|Ωi|) ∑ z∈Ωi max{gj(z + x)li −max k 6=li gj(z + x)k, 0}, (14)\nwhere |Ωi| is the cardinality of the set Ωi, gj(z + x)k denotes the prediction score of class k given the input z + x using model j. In (13), we also set λ = 5. In Algorithm 1, we set α = 0.05, β = 0.01, q = 10 and µ = 5× 10−3, and use the full batch of image samples in attack generation. In experiment, we compare ZO-Min-Max with FO-Min-Max and ZO-Finite-Sum, where the former is the FO counterpart of Algorithm 1, and the latter is ZO-PSGD (Ghadimi et al., 2016) to minimize the finite-sum (average) loss rather than the worst-case (min-max) loss. The comparison with ZOFinite-Sum was motivated by the previous work on designing the adversarial perturbation against model ensembles (Liu et al., 2018a) in which the averaging attack loss over multiple models was considered. Note that although ZO-Finite-Sum consider a different loss function, it is a baseline from the perspective of attack generation.\nIn Figure 1, we demonstrate the empirical convergence of ZO-Min-Max to solve problem (13) from the stationary gap ‖G(x,y)‖2 given in (6) and the attack loss Fij under each model-class pair. In Figure 1-(a), the stationary gap decreases as the iteration increases, which is consistent with the reduction in the attack loss at each MjCi. Here M and C represents network model and image class, respectively. By comparing ZO-Min-Max with FO-Min-Max in Figure 1-(b), we see that the latter yields faster convergence than the former. However, FO-Min-Max has to access the full knowledge on the target neural network for computing the gradient of individual attack losses, yielding white-box attack rather than black-box attack. In Figure 1-(c), We also compare ZO-Min-Max with ZO-FiniteSum, where the latter minimizes the average loss ∑J j=1 ∑I i=1 Fij over all model-class combinations. As we can see, our approach significantly improves the worst-case attack performance (corresponding to M1C1). Here the worst case represents the most robust model-class pair against the attack. This suggests that ZO-Min-Max takes into account different robustness levels of model-class pairs through the design of importance weights w. This can also be evidenced from Figure A2 in Appendix: M1C1 has the largest weight while M2C2 corresponds to the smallest weight. In Figure A3 of Appendix, we further contrast the success or failure of attacking each image using the obtained universal perturbation x with the attacking difficulty (in terms of required iterations for successful adversarial example) of using per-image non-universal PGD attack (Madry et al., 2017b).\nBlack-box poisoning attack against logistic regression model Let D = {zi, ti}ni=1 denote the training dataset, among which n′ n samples are corrupted by a perturbation vector x, leading to poisoned training data zi + x towards breaking the training process and thus the prediction accuracy. The poisoning attack problem is then formulated as\nmaximize ‖x‖∞≤ minimize θ f2(x,θ) := Ftr(x,θ;D0) + λ‖θ‖22, (15)\nwhere x and θ are optimization variables, Ftr(x,θ;D0) denotes the training loss over model parameters θ at the presence of data poison x, and λ > 0 is a regularization parameter. Note that problem (15) can be written in the form of (1) with the objective function−f2(x,θ). 
Clearly, if Ftr is a convex\nloss (e.g., logistic regression or linear regression (Jagielski et al., 2018)), then−f2 is strongly concave in θ. Since the adversary has no knowledge on the training procedure and data, f2(x,θ) is a two-sided black-box function. We provide more details on problem (15) in Appendix C. In Algorithm 1, unless specified otherwise we choose b = 100, q = 5, α = 0.02, β = 0.05, and T = 50000. We report the empirical results averaged over 10 independent trials with random initialization. We compare our method with FO-Min-Max and the BO solver for robust optimization STABLEOPT (Bogunovic et al., 2018) in the data poisoning example of a relatively small problem size.\nIn Figure 2, we present the convergence performance of ZO-Min-Max to generate the data poisoning attack and validate its attack performance in terms of testing accuracy of the logistic regression model trained on the poisoned dataset. Unless specified otherwise, we set 15% poisoning ratio and λ = 10−3 for problem (15). We examine the sensitivity of the regularization parameter λ in Figure A4. Figure 2-(a) shows the stationary gap defined in (6) obtained by ZO-Min-Max under different number of random direction vectors while estimating gradients (2). As we can see, a moderate choice of q (e.g., q ≥ 5 in our example) is sufficient to achieve near-optimal solution compared with FO-Min-Max. However, it suffers from a convergence bias due to the presence of stochastic sampling, consistent with Theorem 1 and 2.\nFigure 2-(b) demonstrates the testing accuracy (against iterations) of the model learnt from poisoned training data, where the poisoning attack is generated by ZO-Min-Max (black-box attack) and FO-Min-Max (white-box attack). As we can see, ZO-Min-Max yields promising attacking performance comparable to FO-Min-Max. We can also see that by contrast with the testing accuracy of the clean model (94% without poison), the poisoning attack eventually reduces the testing accuracy (below 70%). Furthermore, in Figure 2-(c), we present the testing accuracy of the learnt model under different data poisoning ratios. As we can see, only 5% poisoned training data can significantly break the testing accuracy of a well-trained model. In Figure 3, we compare ZO-Min-Max with STABLEOPT (Bogunovic et al., 2018) in terms of testing accuracy versus computation time. Fol-\nlowing (Bogunovic et al., 2018), we present the best accuracy achieved up to the current time step. We observe that STABLEOPT is has a poorer scalability while our method reaches a data poisoning attack that induces much worse testing accuracy within 500 seconds." }, { "heading": "7 CONCLUSION", "text": "This paper addresses black-box robust optimization problems given a finite number of function evaluations. In particular, we present ZO-Min-Max: a framework of alternating, randomized gradient estimation based ZO optimization algorithm to find a first-order stationary solution to the black-box min-max problem. Under mild assumptions, ZO-Min-Max enjoys a sub-linear convergence rate. It scales to dimensions that are infeasible for recent robust solvers based on Bayesian optimization. Furthermore, we experimentally demonstrate the potential application of the framework on real-world scenarios, viz. black-box evasion and data poisoning attacks." 
}, { "heading": "APPENDIX", "text": "" }, { "heading": "A DETAILED CONVERGENCE ANALYSIS", "text": "" }, { "heading": "A.1 TABLE OF PARAMETERS", "text": "In Table A1, we summarize the problem and algorithmic parameters used in our convergence analysis.\nTable A1: Summary of problem and algorithmic parameters and their descriptions.\nparameter description d # of optimization variables b mini-batch size q # of random direction vectors used in ZO gradient estimation α learning rate for ZO-PGD β learning rate for ZO-PGA γ strongly concavity parameter of f(x,y) with respect to y η upper bound on the gradient norm, implying Lipschitz continuity\nLx, Ly Lipschitz continuous gradient constant of f(x,y) with respect to x and y respectively R diameter of the compact convex set X or Y f∗ lower bound on the function value, implying feasibility σ2x, σ 2 y variance of ZO gradient estimator for variable x and y respectively" }, { "heading": "A.2 PROOF OF LEMMA 1", "text": "Before going into the proof, let’s review some preliminaries and give some definitions. Define hµ(x, ξ) to be the smoothed version of h(x, ξ) and since ξ models a subsampling process over a finite number of candidate functions, we can further have hµ(x) , Eξ[hµ(x, ξ)] and∇xhµ(x) = Eξ[∇xhµ(x, ξ)] Recall that in the finite sum setting when ξj parameterizes the jth function, the gradient estimator is given by\n∇̂xh(x) = 1\nbq ∑ j∈I q∑ i=1 d[h(x + µui; ξj)− h(x; ξj)] µ ui. (16)\nwhere I is a set with b elements, containing the indices of functions selected for gradient evaluation. From standard result of the zeroth order gradient estimator, we know\nEI [ Eui,i∈[q] [ ∇̂xh(x) ] ∣∣I] = EI 1 b ∑ j∈I ∇xfµ(x, ξj) = ∇xhµ(x). (17) Now let’s go into the proof. First, we have\nE [ ‖∇̂xh(x)−∇xhµ(x)‖22 ] =EI Eui,i∈[q] ∥∥∥∥∥∥∇̂xh(x)− 1b ∑ j∈I ∇xfµ(x, ξj) + 1 b ∑ j∈I ∇xfµ(x, ξj)−∇xhµ(x) ∥∥∥∥∥∥ 2\n2\n∣∣∣∣I \n≤2EI Eui,i∈[q] ∥∥∥∥∥∥∇̂xh(x)− 1b ∑ j∈I ∇xfµ(x, ξj) ∥∥∥∥∥∥ 2\n2\n+ ∥∥∥∥∥∥1b ∑ j∈I ∇xfµ(x, ξj)−∇xhµ(x) ∥∥∥∥∥∥ 2\n2\n∣∣∣∣I .\n(18)\nFurther, by definition, given I , ∇̂xh(x) is the average of ZO gradient estimates under q i.i.d. random directions, each of which has the mean 1b ∑ j∈I ∇xfµ(x, ξj). Thus for the first term at the right-\nhand-side (RHS) of the above inequality, we have\nEui,i∈[q] ∥∥∥∥∥∥∇̂xh(x)− 1b ∑ j∈I ∇xfµ(x, ξj) ∥∥∥∥∥∥ 2\n2\n∣∣∣∣I ≤1\nq 2d ∥∥∥∥∥∥1b ∑ j∈I ∇xf(x, ξj) ∥∥∥∥∥∥ 2 + µ2L2hd 2 2 ≤1 q ( 2dη2 + µ2L2hd 2 2 ) (19)\nwhere the first inequality is by the standard bound of the variance of zeroth order estimator and the second inequality is by the assumption that ‖∇xh(x; ξ)‖2 ≤ η2 and thus ‖ 1b ∑ j∈I ∇xf(x, ξj)‖2 ≤ η2.In addition, we have\nEI Eui,i∈[q] ∥∥∥∥∥∥1b ∑ j∈I ∇xfµ(x, ξj)−∇xhµ(x) ∥∥∥∥∥∥ 2\n2\n∣∣∣∣I \n=EI ∥∥∥∥∥∥1b ∑ j∈I ∇xfµ(x, ξj)−∇xhµ(x) ∥∥∥∥∥∥ 2\n2 = 1 b Eξ [ ‖∇xfµ(x, ξ)−∇xhµ(x)‖22 ] ≤ η 2 b (20)\nwhere the second equality is because ξj are i.i.d. draws from the same distribution as ξ and E[∇xfµ(x, ξ)] = ∇xhµ(x), the last inequality is because ‖∇xfµ(x, ξ)‖22 ≤ η2 by assumption. Substituting (19) and (20) into (18) finishes the proof." }, { "heading": "A.3 CONVERGENCE ANALYSIS OF ZO-MIN-MAX BY PERFORMING PGA", "text": "In this section, we will provide the details of the proofs. Before proceeding, we have the following illustration, which will be useful in the proof.\nThe order of taking expectation: Since iterates x(t),y(t),∀t are random variables, we need to define\nF (t) = {x(t),y(t),x(t−1),y(t−1), . . . ,x(1),y(1)} (21)\nas the history of the iterates. 
Throughout the theoretical analysis, taking expectation means that we take the expectation over the random variables at the tth iteration conditioned on F(t−1), and then take the expectation over F(t−1).
Subproblems: It is also worth noting that performing (4) and (5) is equivalent to solving the following optimization problems:
x(t) = argmin_{x∈X} ⟨∇̂xf(x(t−1), y(t−1)), x − x(t−1)⟩ + (1/(2α)) ‖x − x(t−1)‖², (22)
y(t) = argmax_{y∈Y} ⟨∇̂yf(x(t), y(t−1)), y − y(t−1)⟩ − (1/(2β)) ‖y − y(t−1)‖². (23)
When f(x, y) is white-box w.r.t. y, (23) becomes
y(t) = argmax_{y∈Y} ⟨∇yf(x(t), y(t−1)), y − y(t−1)⟩ − (1/(2β)) ‖y − y(t−1)‖². (24)
In the proofs for ZO-Min-Max, we use the optimality conditions of these two problems to derive the descent lemmas.
Relationship with the smoothing function: We denote by fµ,x(x, y) the smoothed version of f w.r.t. x with parameter µ > 0; the analogous definition holds for fµ,y(x, y). Taking fµ,x(x, y) as an example, under A2, f and fµ,x satisfy the following relationships (Gao et al., 2014, Lemma 4.1):
|fµ,x(x, y) − f(x, y)| ≤ Lxµ²/2 and ‖∇xfµ,x(x, y) − ∇xf(x, y)‖² ≤ µ²d²Lx²/4, (25)
|fµ,y(x, y) − f(x, y)| ≤ Lyµ²/2 and ‖∇yfµ,y(x, y) − ∇yf(x, y)‖² ≤ µ²d²Ly²/4. (26)
First, we show the descent lemma for the minimization step." }, { "heading": "A.3.1 PROOF OF LEMMA 2", "text": "Proof: Since f(x, y) has Lx-Lipschitz continuous gradients with respect to x, we have
fµ(x(t+1), y(t)) ≤ fµ(x(t), y(t)) + ⟨∇xfµ(x(t), y(t)), x(t+1) − x(t)⟩ + (Lx/2) ‖x(t+1) − x(t)‖²
= fµ(x(t), y(t)) + ⟨∇̂xf(x(t), y(t)), x(t+1) − x(t)⟩ + (Lx/2) ‖x(t+1) − x(t)‖² + ⟨∇xfµ(x(t), y(t)) − ∇̂xf(x(t), y(t)), x(t+1) − x(t)⟩. (27)
Recall that
x(t+1) = projX(x(t) − α∇̂xf(x(t), y(t))). (28)
From the optimality condition of the x-subproblem (22), we have
⟨∇̂xf(x(t), y(t)), x(t+1) − x(t)⟩ ≤ −(1/α) ‖x(t+1) − x(t)‖². (29)
Here we use the fact that the optimality condition of problem (22) at the solution x(t+1) yields ⟨∇̂xf(x(t), y(t)) + (x(t+1) − x(t))/α, x(t+1) − x⟩ ≤ 0 for any x ∈ X. By setting x = x(t), we obtain (29).
In addition, we define another iterate generated by ∇xfµ(x(t), y(t)):
x̂(t+1) = projX(x(t) − α∇xfµ(x(t), y(t))). (30)
Then, we have
⟨∇xfµ(x(t), y(t)) − ∇̂xf(x(t), y(t)), x(t+1) − x(t)⟩ = ⟨∇xfµ(x(t), y(t)) − ∇̂xf(x(t), y(t)), x(t+1) − x(t) − (x̂(t+1) − x(t))⟩ + ⟨∇xfµ(x(t), y(t)) − ∇̂xf(x(t), y(t)), x̂(t+1) − x(t)⟩. (31)
Due to the fact that Eu[∇̂xf(x(t), y(t))] = ∇xfµ(x(t), y(t)), we further have
Eu[⟨∇xfµ(x(t), y(t)) − ∇̂xf(x(t), y(t)), x̂(t+1) − x(t)⟩] = 0. (32)
Finally, we also have
⟨∇xfµ(x(t), y(t)) − ∇̂xf(x(t), y(t)), x(t+1) − x(t) − (x̂(t+1) − x(t))⟩ ≤ (α/2) ‖∇xfµ(x(t), y(t)) − ∇̂xf(x(t), y(t))‖² + (1/(2α)) ‖x(t+1) − x(t) − (x̂(t+1) − x(t))‖² ≤ α ‖∇xfµ(x(t), y(t)) − ∇̂xf(x(t), y(t))‖², (33)
where the first inequality is due to Young's inequality and the second inequality is due to the nonexpansiveness of the projection operator. Thus,
Eu[⟨∇xfµ(x(t), y(t)) − ∇̂xf(x(t), y(t)), x(t+1) − x(t) − (x̂(t+1) − x(t))⟩] ≤ Eu[α ‖∇xfµ(x(t), y(t)) − ∇̂xf(x(t), y(t))‖²] ≤ ασx², (34)
where σx² := σ²(Lx, b, q, d), which was defined in (3).
Combining all of the above, we have
E[fµ(x(t+1), y(t))] ≤ E[fµ(x(t), y(t))] − (1/α − Lx/2) ‖x(t+1) − x(t)‖² + ασx², (35)
and we require α ≤ 1/Lx, which completes the proof.
Using |fµ,x(x, y) − f(x, y)| ≤ Lxµ²/2, we can get
E[f(x(t+1), y(t))] − Lxµ²/2 ≤ E[fµ(x(t+1), y(t))] ≤ E[f(x(t+1), y(t))] + Lxµ²/2, (36)
so combining (35) and (36) we obtain
E[f(x(t+1), y(t))] ≤ E[f(x(t), y(t))] − (1/α − Lx/2) ‖x(t+1) − x(t)‖² + ασx² + Lxµ². (37)
Corollary 1.
E⟨∇̂f(x(t), y(t−1)) − ∇fµ(x(t), y(t−1)), y(t) − y(t−1)⟩ ≤ βσy², (38)
where σy² := σ²(Ly, b, q, d), which was defined in (3)."
}, { "heading": "Proof:", "text": "Define\nỹ(t) = projY(y (t) − β∇yfµ(x(t),y(t−1))), (39)\nwe have\n〈∇yfµ(x(t),y(t−1))− ∇̂xf(x(t),y(t−1)),y(t)−y(t−1)〉 =〈∇yfµ(x(t),y(t−1))− ∇̂yf(x(t),y(t−1)),y(t)−y(t−1)−(ỹ(t) − y(t−1))〉\n+ 〈∇yfµ(x(t),y(t−1))− ∇̂yf(x(t),y(t−1)), ỹ(t) − y(t−1)〉. (40)\nDue to the fact that Eu[∇̂yf(x(t),y(t−1))] = ∇yfµ(x(t),y(t−1)), we further have\nEu[〈∇yfµ(x(t),y(t−1))− ∇̂yf(x(t),y(t−1)), ỹ(t) − y(t−1)〉] = 0. (41)\nFinally, we also have\nEu[〈∇yfµ(x(t),y(t−1))− ∇̂yf(x(t),y(t−1)),y(t)−y(t−1)−(ỹ(t) − y(t−1))〉]\n≤Eu[ β\n2 ‖〈∇yfµ(x(t),y(t−1))− ∇̂yf(x(t),y(t−1))‖2 +\n1\n2β ‖y(t)−y(t−1)−(ỹ(t) − y(t−1))‖2]\n≤Eu[β‖∇yfµ(x(t),y(t−1))− ∇̂yf(x(t),y(t−1))‖2] ≤ βσ2y (42)\nwhere σ2y := σ 2(Ly, b, q, d) which was defined in (3).\nNext, before showing the proof of Lemma 3, we need the following lemma to show the recurrence of the size of the successive difference between two iterations. Lemma 5. Under assumption 1, assume iterates x(t),y(t) generated by algorithm 1. When f(x(t),y) is white-box, we have\n2 β2γ E‖y(t+1)−y(t) ‖2 − 2 β2γ E‖y(t)−y(t−1) ‖2 ≤ 2L\n2 x\nβγ2 E‖x(t+1)−x(t) ‖2\n+ 2\nβ E‖y(t+1)−y(t) ‖2 −\n( 4\nβ − 2L2y γ\n) E‖y(t)−y(t−1) ‖2. (43)\nProof: from the optimality condition of y-subproblem (24) at iteration t and t − 1, we have the following two inequalities:\n−〈∇yf(x(t+1),y(t))− 1\nβ (y(t+1)−y(t)),y(t+1)−y(t)〉 ≤0, (44)\n〈∇yf(x(t),y(t−1))− 1\nβ (y(t)−y(t−1)),y(t+1)−y(t)〉 ≤0. (45)\nAdding the above inequalities, we can get\n1 β 〈v(t+1),y(t+1)−y(t)〉 ≤\n〈 ∇yf(x(t+1),y(t))−∇yf(x(t),y(t)),y(t+1)−y(t) 〉 + 〈 ∇yf(x(t),y(t))−∇yf(x(t),y(t−1)),y(t+1)−y(t) 〉 (46)\nwhere v(t+1) = y(t+1)−y(t)−(y(t)−y(t−1)). According to the quadrilateral indentity, we know〈\nv(t+1),y(t+1)−y(t) 〉 = 1\n2\n( ‖y(t+1)−y(t) ‖2 + ‖v(t+1) ‖2 − ‖y(t)−y(t−1) ‖2 ) . (47)\nBased on the definition of v(t+1), we substituting (47) into (46), which gives\n1 2β ‖y(t+1)−y(t) ‖2 ≤ 1 2β ‖y(t)−y(t−1) ‖2 − 1 2β ‖v(t+1) ‖2\n+ 〈 ∇yf(x(t+1),y(t))−∇yf(x(t),y(t)),y(t+1)−y(t) 〉 + 〈 ∇yf(x(t),y(t))−∇yf(x(t),y(t−1)),y(t+1)−y(t) 〉 (48)\n(a) ≤ 1 2β ‖y(t)−y(t−1) ‖2 +\n〈 ∇yf(x(t+1),y(t))−∇yf(x(t),y(t)),y(t+1)−y(t) 〉 + βL2y\n2 ‖y(t)−y(t−1) ‖2 − γ‖y(t)−y(t−1) ‖2\n(b) ≤ 1 2β ‖y(t)−y(t−1) ‖2 + γ 2 ‖y(t+1)−y(t) ‖2\n+ L2x 2γ ‖x(t+1)−x(t) ‖2 − (γ − βL2y 2 )‖y(t)−y(t−1) ‖2 (49)\nwhere in (a) we use the strong concavity of function f(x,y) in y (with parameter γ > 0) and Young’s inequality, i.e.,\n〈∇yf(x(t),y(t))−∇yf(x(t),y(t−1)),y(t+1)−y(t)〉 =〈∇yf(x(t),y(t))−∇yf(x(t),y(t−1)),v(t+1) +y(t)−y(t−1)〉\n≤ βL2y\n2 ‖y(t)−y(t−1) ‖2 + 1 2β ‖v(t+1) ‖2 − γ‖y(t)−y(t−1) ‖2 (50)\nand in (b) we apply the Young’s inequality, i.e.,〈 ∇yf(x(t+1),y(t))−∇yf(x(t),y(t)),y(t+1)−y(t) 〉 ≤ L 2 x\n2γ ‖x(t+1)−x(t) ‖2+γ 2 ‖y(t+1)−y(t) ‖2.\n(51) Therefore, we have\n1 2β ‖y(t+1)−y(t) ‖2 ≤ 1 2β ‖y(t)−y(t−1) ‖2 + L\n2 x\n2γ ‖x(t+1)−x(t) ‖2\n+ γ\n2 ‖y(t+1)−y(t) ‖2 −\n( γ −\nβL2y 2\n) ‖y(t)−y(t−1) ‖2, (52)\nwhich implies 2\nβ2γ ‖y(t+1)−y(t) ‖2 ≤ 2 β2γ ‖y(t)−y(t−1) ‖2 + 2L\n2 x\nβγ2 ‖x(t+1)−x(t) ‖2\n+ 2\nβ ‖y(t+1)−y(t) ‖2 −\n( 4\nβ − 2L2y γ\n) ‖y(t)−y(t−1) ‖2. (53)\nBy taking the expectation on both sides of (53), we can get the results of Lemma 5.\nLemma 5 basically gives the recursion of ‖∆(t)y ‖2. It can be observed that term (4/β−2L2y/γ)‖∆ (t) y ‖ provides the descent of the recursion when β is small enough, which will take an important role in the proof of Lemma 3 when we quantify the descent in maximization.\nThen, we can quantify the descent of the objective value by the following descent lemma." 
}, { "heading": "A.3.2 PROOF OF LEMMA 3", "text": "Proof: let f ′(x(t+1),y(t+1)) = f(x(t+1),y(t+1))−1(y(t+1)) and 1(y) denote the indicator function with respect to the constraint of y. From the optimality condition of sub-problem y in (23), we have\n∇yf(x(t+1),y(t))− 1\nβ (y(t+1)−y(t))− ξ(t+1) = 0 (54)\nwhere ξ(t) denote the subgradient of 1(y(t)). Since function f ′(x,y) is concave with respect to y, we have f ′(x(t+1),y(t+1))− f ′(x(t+1),y(t)) ≤ 〈∇yf(x(t+1),y(t)),y(t+1)−y(t)〉 − 〈ξ(t),y(t+1)−y(t)〉\n(a) =\n1 β ‖y(t+1)−y(t) ‖2 − 〈ξ(t) − ξ(t+1),y(t+1)−y(t)〉\n= 1\nβ ‖y(t+1)−y(t) ‖2 +\n〈 ∇yf(x(t+1),y(t))−∇yf(x(t),y(t−1)),y(t+1)−y(t) 〉 − 1 β 〈 v(t+1),y(t+1)−y(t) 〉 (55)\nwhere in (a) we use ξ(t+1) = ∇yf(x(t+1),y(t))− 1β (y (t+1)−y(t)) . The last two terms of (55) is the same as the RHS of (46). We can apply the similar steps from (48) to (49). To be more specific, the derivations are shown as follows: First, we know\nf ′(x(t+1),y(t+1))− f ′(x(t+1),y(t)) ≤ 1 β ‖y(t+1)−y(t) ‖2\n+ 〈 ∇yf(x(t+1),y(t))−∇yf(x(t),y(t−1)),y(t+1)−y(t) 〉 − 1 β 〈 v(t+1),y(t+1)−y(t) 〉 . (56)\nThen, we move term 1/β〈v(t+1),y(t+1)−y(t)〉 to RHS of (55) and have f(x(t+1),y(t+1))− f(x(t+1),y(t))\n≤ 1 2β ‖y(t+1)−y(t) ‖2 + 1 2β ‖y(t)−y(t−1) ‖2 − 1 2β ‖v(t+1) ‖2\n+ 〈 ∇yf(x(t+1),y(t))−∇yf(x(t),y(t)),y(t+1)−y(t) 〉 + 〈 ∇yf(x(t),y(t))−∇yf(x(t),y(t−1)),y(t+1)−y(t)\n〉 ≤ 1\n2β ‖y(t+1)−y(t) ‖2 +\n〈 ∇yf(x(t+1),y(t))−∇yf(x(t),y(t)),y(t+1)−y(t) 〉 + βL2y\n2 ‖y(t)−y(t−1) ‖2 − γ‖y(t)−y(t−1) ‖2\n(a) ≤ 1 β ‖y(t+1)−y(t) ‖2 + 1 2β ‖y(t)−y(t−1) ‖2\n+ βL2x\n2 ‖x(t+1)−x(t) ‖2 − (γ − βL2y 2 )‖y(t)−y(t−1) ‖2 (57)\nwhere in (a) we use\n〈∇yf(x(t+1),y(t))−∇yf(x(t),y(t))〉 ≤ βL2x 2 ‖x(t+1)−x(t) ‖2 + 1 2β ‖y(t+1)−y(t) ‖2 (58)\nwhich is different from (51); also y(t),y(t+1) ∈ Y so have f ′(x(t+1),y(t+1)) = f(x(t+1),y(t+1)) and f ′(x(t+1),y(t)) = f(x(t+1),y(t)).\nCombing (53), we have\nf(x(t+1),y(t+1)) +\n( 2\nβ2γ +\n1\n2β\n) ‖y(t+1)−y(t) ‖2 − 4 ( 1\nβ − L2y 2γ\n) ‖y(t+1)−y(t) ‖2\n≤f(x(t+1),y(t)) + ( 2\nβ2γ +\n1\n2β\n) ‖y(t)−y(t−1) ‖2 − 4 ( 1\nβ − L2y 2γ\n) ‖y(t)−y(t−1) ‖2\n−\n( 1\n2β − 2L2y γ\n) ‖y(t+1)−y(t) ‖2 + ( 2L2x γ2β + βL2x 2 ) ‖x(t+1)−x(t) ‖2. (59)\nBy taking the expectation on both sides of (53), we can get the results of Lemma 3.\nNext, we use the following lemma to show the descent of the objective value after solving xsubproblem by (4)." }, { "heading": "A.3.3 PROOF OF THEOREM 1", "text": "Proof:\nFrom Lemma 3, we know E[f(x(t+1),y(t+1))] + ( 2\nβ2γ +\n1\n2β\n) E[‖y(t+1)−y(t) ‖2]\n− 4\n( 1\nβ − L2y 2γ\n) E[‖y(t+1)−y(t) ‖2] ≤ E[f(x(t+1),y(t))]\n+\n( 2\nβ2γ +\n1\n2β\n) E[‖y(t)−y(t−1) ‖2]− 4 ( 1\nβ − L2y 2γ\n) E[‖y(t)−y(t−1) ‖2]\n−\n( 1\n2β − 2L2y γ\n) E[‖y(t+1)−y(t) ‖2] + ( 2L2x γ2β + βL2x 2 ) E[‖x(t+1)−x(t) ‖2]. (60)\nCombining Lemma 2, we have E[f(x(t+1),y(t+1))] + ( 2\nβ2γ +\n1\n2β\n) E [ ‖y(t+1)−y(t) ‖2 ] − 4 ( 1\nβ − L2y 2γ\n) E [ ‖y(t+1)−y(t) ‖2 ] ≤ E[f(x(t),y(t))] + ( 2\nβ2γ +\n1\n2β\n) E [ ‖y(t)−y(t−1) ‖2 ] − 4 ( 1\nβ − L2y 2γ\n) E [ ‖y(t)−y(t−1) ‖2 ] − ( 1\n2β − 2L2y γ ) ︸ ︷︷ ︸\nc1\nE [ ‖y(t+1)−y(t) ‖2 ]\n− ( 1 α − ( Lx 2 + 2L2x γ2β + βL2x 2 )) ︸ ︷︷ ︸\nc2\nE [ ‖x(t+1)−x(t) ‖2 ] + ασ2x + Lxµ 2. (61)\nIf β < γ\n4L2y and α < 1 Lx 2 + 2L2x γ2β + βL2x 2 , (62)\nthen we have that there exist positive constants c1 and c2 such that\nP(x(t+1),y(t+1),∆(t+1)y )− P(x(t),y(t),∆(t)y ) ≤− c1E [ ‖y(t+1)−y(t) ‖2 ] − c2E [ ‖x(t+1)−x(t) ‖2 ] + ασ2x + Lxµ 2\n≤− ζ ( E [ ‖y(t+1)−y(t) ‖2 ] + E [ ‖x(t+1)−x(t) ‖2 ]) + ασ2x + Lxµ 2 (63)\nwhere ζ = min{c1, c2}. 
From (6), we can have
‖G(x(t), y(t))‖ ≤ (1/α) ‖x(t+1) − x(t)‖ + (1/α) ‖x(t+1) − projX(x(t) − α∇xf(x(t), y(t)))‖ + (1/β) ‖y(t+1) − y(t)‖ + (1/β) ‖y(t+1) − projY(y(t) + β∇yf(x(t), y(t)))‖
(a)≤ (1/α) ‖x(t+1) − x(t)‖ + (1/α) ‖projX(x(t+1) − α(∇xf(x(t), y(t)) + (1/α)(x(t+1) − x(t)))) − projX(x(t) − α∇xf(x(t), y(t)))‖ + (1/β) ‖y(t+1) − y(t)‖ + (1/β) ‖projY(y(t+1) + β(∇yf(x(t+1), y(t)) − (1/β)(y(t+1) − y(t)))) − projY(y(t) + β∇yf(x(t), y(t)))‖
(b)≤ (3/α) ‖x(t+1) − x(t)‖ + ‖∇yf(x(t+1), y(t)) − ∇yf(x(t), y(t))‖ + (3/β) ‖y(t+1) − y(t)‖
(c)≤ (3/α + Lx) ‖x(t+1) − x(t)‖ + (3/β) ‖y(t+1) − y(t)‖,
where in (a) we use x(t+1) = projX(x(t+1) − α∇f(x(t+1), y(t)) − (x(t+1) − x(t))); in (b) we use the nonexpansiveness of the projection operator; and in (c) we apply the Lipschitz continuity of the gradients of f(x, y) with respect to x and y under Assumption A2. Therefore, there exists a constant c = max{Lx + 3/α, 3/β} such that
‖G(x(t), y(t))‖² ≤ c ( ‖x(t+1) − x(t)‖² + ‖y(t+1) − y(t)‖² ). (64)
After applying the telescoping sum to (63) and taking the expectation over (64), we have
(1/T) Σ_{t=1}^{T} E‖G(x(t), y(t))‖² ≤ (c/ζ) ( (P1 − PT+1)/T + ασx² + Lxµ² ). (65)
Recall from A1 that f ≥ f∗ and Y is bounded with diameter R; therefore, Pt given by (8) yields
Pt ≥ f∗ + ( min{4 + 4β²Ly² − 7βγ, 0} / (2β²γ) ) R², ∀t. (66)
Letting (x(r), y(r)) be uniformly and randomly picked from {(x(t), y(t))}_{t=1}^{T}, based on (65) and (66) we obtain
Er[E‖G(x(r), y(r))‖²] = (1/T) Σ_{t=1}^{T} E‖G(x(t), y(t))‖² ≤ (c/ζ) ( (P1 − f∗ − νR²)/T + ασx² + Lxµ² ), (67)
where, recall, ζ = min{c1, c2}, c = max{Lx + 3/α, 3/β}, and ν = min{4 + 4β²Ly² − 7βγ, 0} / (2β²γ).
The proof is now complete." }, { "heading": "A.4 CONVERGENCE ANALYSIS OF ZO-MIN-MAX BY PERFORMING ZO-PGA", "text": "Before showing the proof of Lemma 4, we first give the following lemma on the recursion of the difference between two successive iterates of the variable y.
Lemma 6. Under Assumption 1, consider the iterates x(t), y(t) generated by Algorithm 1. When the function f(x(t), y) is black-box, we have
(2/(β²γ)) E‖y(t+1) − y(t)‖² ≤ (2/(β²γ)) E‖y(t) − y(t−1)‖² + (2/β) E‖y(t+1) − y(t)‖² + (6Ly²/(βγ²)) E‖x(t+1) − x(t)‖² − (4/β − (6Ly² + 4)/γ) E‖y(t) − y(t−1)‖² + (4σy²/(βγ)) (3/γ + 4β) + µ²d²Ly²/(β²γ). (68)
Proof: From the optimality condition of the y-subproblem in (23) at iterations t and t − 1, we have
−⟨∇̂yf(x(t+1), y(t)) − (1/β)(y(t+1) − y(t)), y(t+1) − y(t)⟩ ≤ 0, (69)
⟨∇̂yf(x(t), y(t−1)) − (1/β)(y(t) − y(t−1)), y(t+1) − y(t)⟩ ≤ 0. (70)
Adding the above inequalities and applying the definition of v(t+1), we can get
(1/β) ⟨v(t+1), y(t+1) − y(t)⟩ ≤ I + II, where I := ⟨∇̂yf(x(t+1), y(t)) − ∇̂yf(x(t), y(t)), y(t+1) − y(t)⟩ and II := ⟨∇̂yf(x(t), y(t)) − ∇̂yf(x(t), y(t−1)), y(t+1) − y(t)⟩.
(71)
Next, we bound E[I] and E[II] separately as follows.
First, we give an upper bound on E[I]:
E⟨∇̂yf(x(t+1), y(t)) − ∇̂yf(x(t), y(t)), y(t+1) − y(t)⟩ ≤ (3/(2γ)) E‖∇̂yf(x(t+1), y(t)) − ∇yfµ,y(x(t+1), y(t))‖² + (γ/6) E‖y(t+1) − y(t)‖² + (3/(2γ)) E‖∇yfµ,y(x(t+1), y(t)) − ∇yfµ,y(x(t), y(t))‖² + (γ/6) E‖y(t+1) − y(t)‖² + (3/(2γ)) E‖∇yfµ,y(x(t), y(t)) − ∇̂yf(x(t), y(t))‖² + (γ/6) E‖y(t+1) − y(t)‖² ≤ 3σy²/γ + (3Lx²/(2γ)) E‖x(t+1) − x(t)‖² + (γ/2) E‖y(t+1) − y(t)‖², (72)
where Lemma 1 is used.
Second, we give an upper bound on E[II]. We have
⟨∇̂f(x(t), y(t)) − ∇̂f(x(t), y(t−1)), y(t+1) − y(t)⟩ = ⟨∇̂f(x(t), y(t)) − ∇̂f(x(t), y(t−1)), v(t+1) + y(t) − y(t−1)⟩ = ⟨∇f(x(t), y(t)) − ∇f(x(t), y(t−1)), y(t) − y(t−1)⟩ + ⟨∇fµ,y(x(t), y(t)) − ∇f(x(t), y(t)), y(t) − y(t−1)⟩ + ⟨∇̂f(x(t), y(t)) − ∇fµ,y(x(t), y(t)), y(t) − y(t−1)⟩ − ⟨∇fµ,y(x(t), y(t−1)) − ∇f(x(t), y(t−1)), y(t) − y(t−1)⟩ − ⟨∇̂f(x(t), y(t−1)) − ∇fµ,y(x(t), y(t−1)), y(t) − y(t−1)⟩ + ⟨∇̂f(x(t), y(t)) − ∇̂f(x(t), y(t−1)), v(t+1)⟩.
Next, we take the expectation on both sides of the above equality and obtain
E⟨∇̂f(x(t), y(t)) − ∇̂f(x(t), y(t−1)), y(t+1) − y(t)⟩ (a)≤ (3βLy²/2 + β) ‖y(t) − y(t−1)‖² + (1/(2β)) ‖v(t+1)‖² − γ ‖y(t) − y(t−1)‖² + µ²d²Ly²/(4β) + 4βσy², (73)
where in (a) we use the following facts: 1) the γ-strong concavity of f with respect to y:
⟨∇f(x(t), y(t)) − ∇f(x(t), y(t−1)), y(t) − y(t−1)⟩ ≤ −γ ‖y(t) − y(t−1)‖²; (74)
2) the smoothing property (26) and Young's inequality:
E⟨∇fµ,y(x(t), y(t)) − ∇f(x(t), y(t)), y(t) − y(t−1)⟩ ≤ µ²d²Ly²/(8β) + (β/2) ‖y(t) − y(t−1)‖²; (75)
3) the ZO estimator is unbiased according to Lemma 1:
E⟨∇̂f(x(t), y(t)) − ∇fµ,y(x(t), y(t)), y(t) − y(t−1)⟩ = 0, (76)
and
E⟨∇fµ,y(x(t), y(t−1)) − ∇f(x(t), y(t−1)), y(t) − y(t−1)⟩ ≤ µ²d²Ly²/(8β) + (β/2) ‖y(t) − y(t−1)‖²; (77)
4) from Corollary 1 we have
E⟨∇̂f(x(t), y(t−1)) − ∇fµ,y(x(t), y(t−1)), y(t) − y(t−1)⟩ ≤ βσy²; (78)
and 5)
E⟨∇̂f(x(t), y(t)) − ∇̂f(x(t), y(t−1)), v(t+1)⟩ ≤ (3β/2) E‖∇fµ,y(x(t), y(t)) − ∇̂f(x(t), y(t))‖² + (1/(6β)) ‖v(t+1)‖² + (3β/2) E‖∇fµ,y(x(t), y(t)) − ∇fµ,y(x(t), y(t−1))‖² + (1/(6β)) ‖v(t+1)‖² + (3β/2) E‖∇fµ,y(x(t), y(t−1)) − ∇̂f(x(t), y(t−1))‖² + (1/(6β)) ‖v(t+1)‖² ≤ 3βσy² + (1/(2β)) ‖v(t+1)‖² + (3βLy²/2) ‖y(t) − y(t−1)‖². (79)
Then, from (71), we can have
(1/(2β)) E‖y(t+1) − y(t)‖² ≤ (1/(2β)) E‖y(t) − y(t−1)‖² − (1/(2β)) E‖v(t+1)‖² + 3σy²/γ + (3Lx²/(2γ)) E‖x(t+1) − x(t)‖² + (γ/2) E‖y(t+1) − y(t)‖² + E⟨∇̂f(x(t), y(t)) − ∇̂f(x(t), y(t−1)), y(t+1) − y(t)⟩ ≤ (1/(2β)) E‖y(t) − y(t−1)‖² + (γ/2) E‖y(t+1) − y(t)‖² + (3Ly²/(2γ)) E‖x(t+1) − x(t)‖² − (γ − (3βLy²/2 + β)) E‖y(t) − y(t−1)‖² + 3σy²/γ + 4βσy² + µ²d²Ly²/(4β), (80)
which implies
(2/(β²γ)) E‖y(t+1) − y(t)‖² ≤ (2/(β²γ)) E‖y(t) − y(t−1)‖² + (2/β) E‖y(t+1) − y(t)‖² + (6Ly²/(βγ²)) E‖x(t+1) − x(t)‖² − (4/β − (6Ly² + 4)/γ) E‖y(t) − y(t−1)‖² + (4σy²/(βγ)) (3/γ + 4β) + µ²d²Ly²/(β²γ). (81)" }, { "heading": "A.4.1 PROOF OF LEMMA 4", "text": "Proof: Similarly to A.3.2, let f′(x(t+1), y(t+1)) = f(x(t+1), y(t+1)) − 1(y(t+1)), where 1(·) denotes the indicator function and ξ(t) denotes the subgradient of 1(y(t)). Since the function f′(x, y) is concave with respect to y, we have
f′(x(t+1), y(t+1)) − f′(x(t+1), y(t)) ≤ ⟨∇f(x(t+1), y(t)), y(t+1) − y(t)⟩ − ⟨ξ(t), y(t+1) − y(t)⟩ (a)= (1/β) ‖y(t+1) − y(t)‖² − ⟨ξ(t) − ξ(t+1), y(t+1) − y(t)⟩ = (1/β) ‖y(t+1) − y(t)‖² + ⟨∇̂f(x(t+1), y(t)) − ∇̂f(x(t), y(t−1)), y(t+1) − y(t)⟩ − (1/β) ⟨v(t+1), y(t+1) − y(t)⟩, (82)
where in (a) we use ξ(t+1) = ∇̂f(x(t+1), y(t)) − (1/β)(y(t+1) − y(t)).
Then, we have
E f(x(t+1), y(t+1)) − E f(x(t+1), y(t)) + (1/β) ⟨v(t+1), y(t+1) − y(t)⟩ ≤ (1/β) ‖y(t+1) − y(t)‖² + ⟨∇̂f(x(t+1), y(t)) − ∇̂f(x(t), y(t−1)), y(t+1) − y(t)⟩.
Applying the steps from (73) to (80), we can get
E f(x(t+1), y(t+1)) − E f(x(t+1), y(t)) ≤ (1/β) E‖y(t+1) − y(t)‖² + (1/(2β)) E‖y(t) − y(t−1)‖² − (γ − (3βLy²/2 + β)) ‖y(t) − y(t−1)‖² + (3βLx²/2) E‖x(t+1) − x(t)‖² + 7βσy² + µ²d²Ly²/(4β), (83)
where we use
E⟨∇̂yf(x(t+1), y(t)) − ∇̂yf(x(t), y(t)), y(t+1) − y(t)⟩ ≤ 3βσy² + (3βLx²/2) E‖x(t+1) − x(t)‖² + (1/(2β)) E‖y(t+1) − y(t)‖². (84)
Combining (81), we have
E f(x(t+1), y(t+1)) + (2/(β²γ) + 1/(2β)) E‖y(t+1) − y(t)‖² − (4/β − (6Ly² + 4)/γ) E‖y(t+1) − y(t)‖² ≤ E f(x(t+1), y(t)) + (2/(β²γ) + 1/(2β)) E‖y(t) − y(t−1)‖² − (4/β − (6Ly² + 4)/γ) E‖y(t) − y(t−1)‖² − (1/(2β) − (6Ly² + 4)/γ) E‖y(t+1) − y(t)‖² + (6Lx²/(γ²β) + 3βLx²/2) E‖x(t+1) − x(t)‖² + (µ²d²Ly²/β) (1/4 + 1/(βγ)) + (7β + (4/(βγ)) (3/γ + 7β)) σy². (85)" }, { "heading": "A.4.2 PROOF OF THEOREM 2", "text": "Proof: From (37), we know the “descent” of the minimization step, i.e., the change from P′(x(t), y(t), ∆(t)y) to P′(x(t+1), y(t), ∆(t)y). Combining this with the “descent” of the maximization step given by Lemma 4 in (85), we obtain
P′(x(t+1), y(t+1), ∆(t+1)y) ≤ P′(x(t), y(t), ∆(t)y) − a1 E[‖y(t+1) − y(t)‖²] − a2 E[‖x(t+1) − x(t)‖²] + b1 µ² + ασx² + b2 σy², (86)
where a1 := 1/(2β) − (6Ly² + 4)/γ, a2 := 1/α − (Lx/2 + 6Lx²/(γ²β) + 3βLx²/2), b1 := Lx + (d²Ly²/β)(1/4 + 1/(βγ)), and b2 := 7β + (4/(βγ))(3/γ + 4β).
When β and α satisfy the conditions
β < γ/(4(3Ly² + 2)) and α < 1/(Lx/2 + 6Lx²/(γ²β) + 3βLx²/2), (87)
we can conclude that a1, a2 > 0, so that
P′(x(t+1), y(t+1), ∆(t+1)y) ≤ P′(x(t), y(t), ∆(t)y) − a1 E[‖y(t+1) − y(t)‖²] − a2 E[‖x(t+1) − x(t)‖²] + b1 µ² + ασx² + b2 σy² ≤ −ζ′ E[‖y(t+1) − y(t)‖² + ‖x(t+1) − x(t)‖²] + b1 µ² + ασx² + b2 σy², (88)
where ζ′ = min{a1, a2}.
From (6), we can have
E‖G(x(t), y(t))‖ ≤ (1/α) E‖x(t+1) − x(t)‖ + (1/α) E‖x(t+1) − projX(x(t) − α∇xf(x(t), y(t)))‖ + (1/β) E‖y(t+1) − y(t)‖ + (1/β) E‖y(t+1) − projY(y(t) + β∇yf(x(t), y(t)))‖
(a)≤ (1/α) E‖x(t+1) − x(t)‖ + (1/β) E‖y(t+1) − y(t)‖ + (1/α) E‖projX(x(t+1) − α(∇̂xf(x(t), y(t)) + (1/α)(x(t+1) − x(t)))) − projX(x(t) − α∇xf(x(t), y(t)))‖ + (1/β) E‖projY(y(t+1) + β(∇̂yf(x(t+1), y(t)) − (1/β)(y(t+1) − y(t)))) − projY(y(t) + β∇yf(x(t), y(t)))‖
(b)≤ (3/α) E‖x(t+1) − x(t)‖ + E‖∇̂xf(x(t), y(t)) − ∇xf(x(t), y(t))‖ + (3/β) E‖y(t+1) − y(t)‖ + E‖∇̂yf(x(t+1), y(t)) − ∇yf(x(t), y(t))‖
≤ (3/α) E‖x(t+1) − x(t)‖ + E‖∇̂xf(x(t), y(t)) − ∇xfµ,x(x(t), y(t))‖ + E‖∇xfµ,x(x(t), y(t)) − ∇xf(x(t), y(t))‖ + (3/β) E‖y(t+1) − y(t)‖ + E‖∇̂yf(x(t+1), y(t)) − ∇yfµ,y(x(t+1), y(t))‖ + E‖∇yfµ,y(x(t+1), y(t)) − ∇yfµ,y(x(t), y(t))‖ + E‖∇yfµ,y(x(t), y(t)) − ∇yf(x(t), y(t))‖
(c)≤ (3/α + Lx) E‖x(t+1) − x(t)‖ + (3/β) E‖y(t+1) − y(t)‖ + 2σy² + µ²d²Ly²,
where in (a) we use the optimality condition of the x(t)-subproblem; in (b) we use the nonexpansiveness of the projection operator; and in (c) we apply the Lipschitz continuity of the gradients of f(x, y) under Assumption A2.
Therefore, we know that
E[‖G(x(t), y(t))‖²] ≤ c ( ‖x(t+1) − x(t)‖² + ‖y(t+1) − y(t)‖² ) + 2σy² + µ²d²Ly². (89)
After applying the telescoping sum to (88) and taking the expectation over (89), we have
(1/T) Σ_{t=1}^{T} E[‖G(x(t), y(t))‖²] ≤ (c/ζ′)(P1 − PT+1)/T + (c b1/ζ′) µ² + (c α σx²/ζ′) + (c b2/ζ′) σy² + 2σy² + µ²d²Ly². (90)
Recall from A1 that f ≥ f∗ and Y is bounded with diameter R; therefore, Pt given by (11) yields
Pt ≥ f∗ + ( min{4 + 4(3Ly² + 2)β² − 7βγ, 0} / (β²γ) ) R², ∀t.
(91)
Letting (x(r), y(r)) be uniformly and randomly picked from {(x(t), y(t))}_{t=1}^{T}, based on (91) and (90) we obtain
Er[E[‖G(x(r), y(r))‖²]] = (1/T) Σ_{t=1}^{T} E[‖G(x(t), y(t))‖²] ≤ (c/ζ′)(P1 − f∗ − ν′R²)/T + (c b1/ζ′) µ² + (c α σx²/ζ′) + (c b2/ζ′) σy² + 2σy² + µ²d²Ly², (92)
where, recall, ζ′ = min{a1, a2}, c = max{Lx + 3/α, 3/β}, and ν′ = min{4 + 4(3Ly² + 2)β² − 7βγ, 0} / (β²γ).
The proof is now complete." }, { "heading": "B TOY EXAMPLE IN BOGUNOVIC ET AL. (2018): ZO-MIN-MAX VERSUS BO", "text": "We review the example in Bogunovic et al. (2018) below:
maximize_{x∈C} minimize_{‖δ‖2≤0.5} f(x − δ) := −2(x1 − δ1)^6 + 12.2(x1 − δ1)^5 − 21.2(x1 − δ1)^4 − 6.2(x1 − δ1) + 6.4(x1 − δ1)^3 + 4.7(x1 − δ1)^2 − (x2 − δ2)^6 + 11(x2 − δ2)^5 − 43.3(x2 − δ2)^4 + 10(x2 − δ2) + 74.8(x2 − δ2)^3 − 56.9(x2 − δ2)^2 + 4.1(x1 − δ1)(x2 − δ2) + 0.1(x1 − δ1)^2(x2 − δ2)^2 − 0.4(x2 − δ2)^2(x1 − δ1) − 0.4(x1 − δ1)^2(x2 − δ2), (93)
where x ∈ R², and C = {x1 ∈ (−0.95, 3.2), x2 ∈ (−0.45, 4.4)}. Problem (93) can be equivalently transformed into the min-max setting consistent with ours:
minimize_{x∈C} maximize_{‖δ‖2≤0.5} −f(x − δ). (94)
The optimality of solving problem (93) is measured by the regret versus iteration t,
Regret(t) = minimize_{‖δ‖2≤0.5} f(x∗ − δ) − minimize_{‖δ‖2≤0.5} f(x(t) − δ), (95)
where minimize_{‖δ‖2≤0.5} f(x∗ − δ) = −4.33 and x∗ = [−0.195, 0.284]^T (Bogunovic et al., 2018).
In Figure A1, we compare the convergence performance and computation time of ZO-Min-Max with the BO-based approach STABLEOPT proposed in Bogunovic et al. (2018). Here we choose the same initial point for both ZO-Min-Max and STABLEOPT, and we set the same number of function queries per iteration for ZO-Min-Max (with q = 1) and STABLEOPT. We recall from (2) that the larger q is, the more queries ZO-Min-Max takes. In our experiments, we present the best achieved regret up to time t and report the average performance of each method over 5 random trials. As we can see, ZO-Min-Max is more stable, with lower regret and less running time. Moreover, as q becomes larger, ZO-Min-Max converges faster. We remark that BO is slow because learning an accurate GP model and solving the acquisition problem take intensive computation.
Figure A1: Comparison of ZO-Min-Max against STABLEOPT (Bogunovic et al., 2018): a) convergence performance (regret versus iterations); b) computation time (seconds)." }, { "heading": "C EXPERIMENT SETUP ON POISONING ATTACK", "text": "In our experiment, we generate a synthetic dataset that contains n = 1000 samples (zi, ti). We randomly draw the feature vector zi ∈ R^100 from N(0, I), and set ti = 1 if 1/(1 + e^{−(zi^T θ∗ + νi)}) > 0.5. Here we choose θ∗ = 1 as the ground-truth model parameters, and νi ∼ N(0, 10^−3) as random noise. We randomly split the generated dataset into the training dataset Dtr (70%) and the testing dataset Dte (30%). We specify our learning model as the logistic regression model for binary classification.
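Before defining the training loss, the following is a minimal Python sketch of the data-generation protocol just described. The function name is ours, and interpreting 10^−3 as the noise variance (rather than the standard deviation) is an assumption.

```python
import numpy as np

def generate_poisoning_data(n=1000, d=100, noise_var=1e-3, seed=0):
    """Synthetic dataset for the poisoning experiment (Appendix C): z_i ~ N(0, I),
    t_i = 1 iff sigmoid(z_i^T theta* + nu_i) > 0.5, with theta* = 1 and a 70/30 split."""
    rng = np.random.RandomState(seed)
    theta_star = np.ones(d)                       # ground-truth model parameters
    Z = rng.randn(n, d)                           # features z_i ~ N(0, I)
    nu = np.sqrt(noise_var) * rng.randn(n)        # random noise nu_i
    t = (1.0 / (1.0 + np.exp(-(Z @ theta_star + nu))) > 0.5).astype(float)
    idx = rng.permutation(n)                      # random 70/30 train/test split
    n_tr = int(0.7 * n)
    return (Z[idx[:n_tr]], t[idx[:n_tr]]), (Z[idx[n_tr:]], t[idx[n_tr:]])
```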
Given this dataset, the loss function in problem (15) is chosen as Ftr(x, θ; Dtr) := h(x, θ; Dtr,1) + h(0, θ; Dtr,2), where Dtr = Dtr,1 ∪ Dtr,2, Dtr,1 represents the subset of the training dataset that will be poisoned, |Dtr,1|/|Dtr| denotes the poisoning ratio, h(x, θ; D) = −(1/|D|) Σ_{(zi,ti)∈D} [ti log(h(x, θ; zi)) + (1 − ti) log(1 − h(x, θ; zi))], and h(x, θ; zi) = 1/(1 + e^{−(ai+x)^T θ}). In problem (15), we also set ε = 2 and λ = 10^−3. In Algorithm 1, unless specified otherwise, we choose the mini-batch size b = 100, the number of random direction vectors q = 5, the learning rates α = 0.02 and β = 0.05, and the total number of iterations T = 50000. We report the empirical results over 10 independent trials with random initialization." }, { "heading": "D ADDITIONAL EXPERIMENT RESULTS", "text": "In Figure A2, we show how the importance weights w of the individual attack losses are learnt during ZO-Min-Max (vs. FO-Min-Max). We can see that ZO-Min-Max takes into account the different robustness levels of the model-class pairs through the design of w.
Figure A2: Convergence of importance weights learnt by ZO-Min-Max vs. FO-Min-Max (weights versus iterations for the model-class pairs M1C1, M1C2, M2C1, and M2C2).
In Figure A3, we contrast the success or failure (marked by blue or red in the plot) of attacking each image using the obtained universal perturbation x with the attacking difficulty (in terms of the number of iterations required for a successful adversarial example) of the per-image, non-universal PGD attack (Madry et al., 2017b). We observe that the success rate of the ensemble universal attack is around 80% for each model-class pair, and the failed cases (red cross markers) also require a large number of iterations to succeed in the case of the per-image PGD attack. Moreover, the images that are difficult to attack remain consistent across models; see the dashed lines associating the same images between two models in Figure A3.
Figure A3: Success or failure of our ensemble attack versus the number of iterations of a successful per-image PGD attack, for the model-class pairs M1C1, M2C1, M1C2, and M2C2.
In Figure A4, we show the testing accuracy of the poisoned model as the regularization parameter λ varies. We observe that the poisoned model accuracy can be improved as λ increases, e.g., at λ = 1. However, this leads to a decrease in clean model accuracy (below 90% at λ = 1), which implies a robustness-accuracy tradeoff. If λ continues to increase, both the clean and poisoned accuracies decrease dramatically, since the training loss in (15) is less optimized.
Figure A4: Empirical performance of ZO-Min-Max in the design of poisoning attacks: testing accuracy versus regularization parameter λ." } ]
2019
null
SP:41867edbd1bb96ff8340c8decefba2127a67dced
[ "The paper proposes a model building off of the generative query network model that takes in as input multiple images, builds a model of the 3D scene, and renders it. This can be trained end to end. The insight of the method is that one can factor the underlying representation into different objects. The system is trained on scenes rendered in mujoco.", "The paper presents a framework for 3D representation learning from images of 2D scenes. The proposed architecture, which the authors call ROOTS (Representation of Object-Oriented Three-dimension Scenes), is based on the CGQN (Consistent Generative Query Networks) network. The paper provides 2 modifications. The representation is 1. factorized to differentiate objects and background and 2. hierarchical to first have a view point invariant 3D representation and then a view-point dependent 2D representation. Qualitative and qualitative experiments are performed using the MuJoCo physics simulator [1] (please add citation in the paper). " ]
In this paper, we propose a probabilistic generative model, called ROOTS, for unsupervised learning of object-oriented 3D-scene representation and rendering. ROOTS is based on the Generative Query Network (GQN) framework. However, unlike GQN, ROOTS provides an independent, modular, and object-oriented decomposition of the 3D-scene representation. In ROOTS, the inferred object-oriented representation is 3D in the sense that it is 3D-viewpoint invariant, just as the scene-level representation of GQN is. ROOTS also provides a hierarchical object-oriented representation: at the 3D global-scene level and at the 2D local-image level. In experiments, on datasets of 3D rooms with multiple objects, we demonstrate the above properties, focusing on the model's abilities for disentanglement, compositionality, transferability, and generalization. ROOTS achieves this without degradation of generation quality in comparison to GQN.
[]
[ { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein" ], "title": "Neural module networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Christopher P Burgess", "Loic Matthey", "Nicholas Watters", "Rishabh Kabra", "Irina Higgins", "Matt Botvinick", "Alexander Lerchner. Monet" ], "title": "Unsupervised scene decomposition and representation", "venue": null, "year": 1901 }, { "authors": [ "Ricson Cheng", "Ziyan Wang", "Katerina Fragkiadaki" ], "title": "Geometry-aware recurrent neural networks for active visual recognition", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Christopher B Choy", "Danfei Xu", "JunYoung Gwak", "Kevin Chen", "Silvio Savarese" ], "title": "3d-r2n2: A unified approach for single and multi-view 3d object reconstruction", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Eric Crawford", "Joelle Pineau" ], "title": "Spatially invariant unsupervised object detection with convolutional neural networks", "venue": "In Proceedings of AAAI,", "year": 2019 }, { "authors": [ "Yilun Du", "Zhijian Liu", "Hector Basevi", "Ales Leonardis", "Bill Freeman", "Josh Tenenbaum", "Jiajun Wu" ], "title": "Learning to exploit stability for 3d scene parsing", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "SM Ali Eslami", "Nicolas Heess", "Theophane Weber", "Yuval Tassa", "David Szepesvari", "Geoffrey E Hinton" ], "title": "Attend, infer, repeat: Fast scene understanding with generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "SM Ali Eslami", "Danilo Jimenez Rezende", "Frederic Besse", "Fabio Viola", "Ari S Morcos", "Marta Garnelo", "Avraham Ruderman", "Andrei A Rusu", "Ivo Danihelka", "Karol Gregor", "David P Reichert", "Lars Buesing", "Theophane Weber", "Oriol Vinyals", "Dan Rosenbaum", "Neil Rabinowitz", "Helen King", "Chloe Hillier", "Matt Botvinick", "Daan Wierstra", "Koray Kavukcuoglu", "Demis Hassabis" ], "title": "Neural scene representation and rendering", "venue": null, "year": 2018 }, { "authors": [ "Marta Garnelo", "Murray Shanahan" ], "title": "Reconciling deep learning with symbolic artificial intelligence: representing objects and relations", "venue": "Current Opinion in Behavioral Sciences,", "year": 2019 }, { "authors": [ "Will Grathwohl", "Dami Choi", "Yuhuai Wu", "Geoffrey Roeder", "David Duvenaud" ], "title": "Backpropagation through the void: Optimizing control variates for black-box gradient estimation", "venue": "arXiv preprint arXiv:1711.00123,", "year": 2017 }, { "authors": [ "Klaus Greff", "Sjoerd van Steenkiste", "Jürgen Schmidhuber" ], "title": "Neural expectation maximization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Klaus Greff", "Raphaël Lopez Kaufmann", "Rishab Kabra", "Nick Watters", "Chris Burgess", "Daniel Zoran", "Loic Matthey", "Matthew Botvinick", "Alexander Lerchner" ], "title": "Multi-object representation learning with iterative variational inference", "venue": null, "year": 1903 }, { "authors": [ "Karol Gregor", "Frederic Besse", "Danilo Jimenez Rezende", "Ivo Danihelka", "Daan Wierstra" ], "title": "Towards conceptual compression", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Siyuan Huang", "Siyuan Qi", "Yinxue Xiao", "Yixin Zhu", 
"Ying Nian Wu", "Song-Chun Zhu" ], "title": "Cooperative holistic scene understanding: Unifying 3d object, layout, and camera pose estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Karen Simonyan", "Andrew Zisserman", "Koray Kavukcuoglu" ], "title": "Spatial transformer networks. In Advances in neural information processing", "venue": null, "year": 2017 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Ken Kansky", "Tom Silver", "David A Mély", "Mohamed Eldawy", "Miguel Lázaro-Gredilla", "Xinghua Lou", "Nimrod Dorfman", "Szymon Sidor", "Scott Phoenix", "Dileep George" ], "title": "Schema networks: Zeroshot transfer with a generative causal model of intuitive physics", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Abhishek Kar", "Christian Häne", "Jitendra Malik" ], "title": "Learning a multi-view stereo machine", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Hiroharu Kato", "Tatsuya Harada" ], "title": "Learning view priors for single-view 3d reconstruction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Ananya Kumar", "SM Eslami", "Danilo J Rezende", "Marta Garnelo", "Fabio Viola", "Edward Lockhart", "Murray Shanahan" ], "title": "Consistent generative query networks", "venue": null, "year": 1807 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Joseph Marino", "Yisong Yue", "Stephan Mandt" ], "title": "Iterative amortized inference", "venue": "arXiv preprint arXiv:1807.09356,", "year": 2018 }, { "authors": [ "Thu Nguyen-Phuoc", "Chuan Li", "Lucas Theis", "Christian Richardt", "Yong-Liang Yang" ], "title": "Hologan: Unsupervised learning of 3d representations from natural images", "venue": null, "year": 1904 }, { "authors": [ "Pedro O Pinheiro", "Negar Rostamzadeh", "Sungjin Ahn" ], "title": "Domain-adaptive single-view 3d reconstruction", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Joseph Redmon", "Santosh Divvala", "Ross Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. 
In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Daeyun Shin", "Zhile Ren", "Erik B Sudderth", "Charless C Fowlkes" ], "title": "3d scene reconstruction with multi-layer depth and epipolar transformers", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Vincent Sitzmann", "Justus Thies", "Felix Heide", "Matthias Nießner", "Gordon Wetzstein", "Michael Zollhofer" ], "title": "Deepvoxels: Learning persistent 3d feature embeddings", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "George Tucker", "Andriy Mnih", "Chris J Maddison", "John Lawson", "Jascha Sohl-Dickstein" ], "title": "Rebar: Low-variance, unbiased gradient estimates for discrete latent variable models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Shubham Tulsiani", "Tinghui Zhou", "Alexei A Efros", "Jitendra Malik" ], "title": "Multi-view supervision for single-view reconstruction via differentiable ray consistency", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Shubham Tulsiani", "Saurabh Gupta", "David F Fouhey", "Alexei A Efros", "Jitendra Malik" ], "title": "Factoring shape, pose, and layout from the 2d image of a 3d scene", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Jiajun Wu", "Chengkai Zhang", "Tianfan Xue", "Bill Freeman", "Josh Tenenbaum" ], "title": "Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yukako Yamane", "Eric T Carlson", "Katherine C Bowman", "Zhihong Wang", "Charles E Connor" ], "title": "A neural code for three-dimensional object shape in macaque inferotemporal cortex", "venue": "Nature neuroscience,", "year": 2008 }, { "authors": [ "Xinchen Yan", "Jimei Yang", "Ersin Yumer", "Yijie Guo", "Honglak Lee" ], "title": "Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Chong Yu", "Young Wang" ], "title": "3d-scene-gan: Three-dimensional scene reconstruction with generative adversarial networks", "venue": "In ICLR (Workshop),", "year": 2018 }, { "authors": [ "Tinghui Zhou", "Matthew Brown", "Noah Snavely", "David G Lowe" ], "title": "Unsupervised learning of depth and ego-motion from video", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The shortcomings of contemporary deep learning such as interpretability, sample efficiency, ability for reasoning and causal inference, transferability, and compositionality, are where the symbolic AI has traditionally shown its strengths (Garnelo & Shanahan, 2019). Thus, one of the grand challenges in machine learning has been to make deep learning embrace the benefits of symbolic representation so that symbolic entities can emerge from high-dimensional observations such as visual scenes.\nIn particular, for learning from visual observations of the physical world, such representation should consider the following criteria. First, it should focus on objects (and their relations) which are foundational entities constructing the physical world. These can be considered as units on which we can build a modular model. The modular nature also helps compositionality (Andreas et al., 2016) and transferability (Kansky et al., 2017). Second, being three-dimensional (3D) is a decisive property of the physical world. We humans, equipped with such 3D representation in our brain (Yamane et al., 2008), can retain consistency on the identity of an object even if it is observed from different viewpoints. Lastly, learning such representation should be unsupervised. Although there have been remarkable advances in supervised methods to object perception (Redmon et al., 2016; Ren et al., 2015; Long et al., 2015), the technology should advance toward unsupervised learning as we humans do. This not only avoids expensive labeling efforts but also allows adaptability and flexibility to the evolving goals of various downstream tasks because “objectness” itself can vary on the situation.\nIn this paper, we propose a probabilistic generative model that can learn, without supervision, objectoriented 3D representation of a 3D scene from its partial 2D observations. We call the proposed model ROOTS (Representation of Object-Oriented Three-dimensional Scenes). We base our model on the framework of Generative Query Networks (GQN) (Eslami et al., 2018; Kumar et al., 2018). However, unlike GQN which provides only a scene-level representation that encodes the whole 3D scene into a single continuous vector, the scene representation of ROOTS is decomposed into objectwise representations each of which is also an independent, modular, and 3D representation. Further, ROOTS learns to model a background representation separately for the non-object part of the scene. The object-oriented representation of ROOTS is more interpretable, composible, and transferable. Besides, ROOTS provides the two-level hierarchy of the object-oriented representation: one for a global 3D scene and another for local 2D images. This makes the model more interpretable and provides more useful structure for downstream tasks. In experiments, we show the above abilities of ROOTS on the 3D-Room dataset containing images of 3D rooms with several objects of different\ncolors and shapes. We also show that these new abilities are achieved without sacrificing generation quality compared to GQN.\nOur proposed problem and method are significantly different from existing works on visual 3D learning although some of those partly tackle some of our challenges. First, our model learns factorized object-oriented 3D representations which are independent and modular, from a scene containing multiple objects with occlusion and partial observability rather than a single object. 
Second, our method is unsupervised, using neither 3D structure annotations such as voxels, point clouds, or meshes, nor bounding-box or segmentation annotations. Third, our model is a probabilistic generative model that learns both representation and rendering with uncertainty modeling. Lastly, it is trained end-to-end. In Section 4, we provide more discussion of the related works.
The main contributions are: (i) we propose, in the GQN framework, the new problem of learning object-oriented 3D representations of a 3D scene containing multiple objects with occlusion and partial observability in the challenging setting described above; (ii) we achieve this by proposing a new probabilistic model and neural architecture; (iii) we demonstrate that our model enables various new abilities, such as compositionality and transferability, while not losing generation quality." }, { "heading": "2 PRELIMINARY: GENERATIVE QUERY NETWORKS", "text": "The Generative Query Network (GQN) (Eslami et al., 2018) is a probabilistic generative latent-variable model providing a framework to learn a 3D representation of a 3D scene. In this framework, an agent navigating a scene i collects K images x^k_i from 2D viewpoints v^k_i. We refer to this collection as the context observations C_i = {(x^k_i, v^k_i)}_{k=1,...,K}. While GQN is trained on a set of scenes, in the following we omit the scene index i for brevity and discuss a single scene without loss of generality. GQN learns a scene representation z from the context C. The learned representation z of GQN is a 3D-viewpoint-invariant representation of the scene in the sense that, given an arbitrary query viewpoint v^q, its corresponding 2D image x^q can be generated from the representation.
In the GQN framework, there are two versions. The standard GQN model (Eslami et al., 2018) uses the query viewpoint to generate the representation, whereas the Consistent GQN (CGQN) (Kumar et al., 2018) applies the query only after generating the scene representation, in order to obtain a query-independent scene-level representation. Although we use CGQN as our base framework to obtain a query-independent scene-level representation, in the rest of the paper we use the abbreviation GQN to indicate the general GQN framework embracing both GQN and CGQN.
The generative process of GQN is written as follows: p(x^q | v^q, C) = ∫ p(x^q | v^q, z) p(z | C) dz. As shown, GQN uses a conditional prior p(z | C) to learn the scene representation z from the context. To do this, it first obtains a neural scene representation r from the representation network r = f_repr-gqn(C), which combines the encodings of (v^k, x^k) ∈ C in an order-invariant way, e.g., by sum or mean. It then uses ConvDRAW (Gregor et al., 2016) to generate the scene latent variable z from the scene representation r by p(z | C) = ∏_{l=1:L} p(z^l | z^{<l}, r) = ConvDRAW(r), with L autoregressive rollout steps. Due to the intractability of the posterior distribution p(z | C, v^q, x^q), GQN uses variational inference for posterior approximation and the reparameterization trick (Kingma & Welling, 2013) for backpropagation through the stochastic variables. The objective is to maximize the following evidence lower bound (ELBO) via gradient-based optimization:
log p_θ(x^q | v^q, C) ≥ E_{q_φ(z | C, v^q, x^q)} [log p_θ(x^q | v^q, z)] − KL(q_φ(z | C, v^q, x^q) ‖ p_θ(z | C)).
Note that although in this paper we use a single target observation D = (x^q, v^q) for brevity, the model is in general trained on a set of target observations D = {(x^q_j, v^q_j)}_j."
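To make the structure of this objective concrete, below is a small, self-contained PyTorch sketch of a GQN-style conditional-prior ELBO. It replaces the L-step ConvDRAW prior/posterior with a single Gaussian latent and uses flattened vector observations instead of images; all module names and dimensions are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniGQN(nn.Module):
    """Toy, single-latent sketch of the GQN conditional-prior ELBO."""
    def __init__(self, x_dim=64, v_dim=7, r_dim=128, z_dim=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(x_dim + v_dim, r_dim), nn.ReLU())
        self.prior_net = nn.Linear(r_dim, 2 * z_dim)                  # p(z | C)
        self.post_net = nn.Linear(r_dim + x_dim + v_dim, 2 * z_dim)   # q(z | C, x_q, v_q)
        self.decode = nn.Sequential(nn.Linear(z_dim + v_dim, 256), nn.ReLU(),
                                    nn.Linear(256, x_dim))            # p(x_q | z, v_q)

    def forward(self, xs, vs, xq, vq):
        # Order-invariant context representation: sum over the K views.
        r = self.encode(torch.cat([xs, vs], dim=-1)).sum(dim=1)
        mu_p, logvar_p = self.prior_net(r).chunk(2, dim=-1)
        mu_q, logvar_q = self.post_net(torch.cat([r, xq, vq], dim=-1)).chunk(2, dim=-1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()    # reparameterization
        recon = F.mse_loss(self.decode(torch.cat([z, vq], dim=-1)), xq, reduction='sum')
        # KL(q || p) between two diagonal Gaussians.
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p).pow(2)) / logvar_p.exp() - 1).sum()
        return recon + kl                                             # negative ELBO
```

The two design points carried over from GQN are the order-invariant aggregation over context encodings and the KL between the data-conditioned posterior and the context-conditioned prior.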
}, { "heading": "3 ROOTS: REPRESENTATION OF OBJECT-ORIENTED 3D SCENES", "text": "" }, { "heading": "3.1 GENERATIVE PROCESS", "text": "The main difference of our model from GQN is that we have a 3D representation per object present in the target 3D space while GQN has a single 3D representation compressing the whole 3D space into a vector without object-level decomposition. We begin this modeling by introducing the number of objects M in the target space as a random variable. Then, we can write the representation\nprior of ROOTS as p(z,M |C) = p(M |C)\n∏M\nm=1 p(z(m)|C). To implement such a model with a variable number of objects, in AIR (Eslami et al., 2016), the authors proposed to use an RNN that rolls out M steps, processing one object per step. However, according to our preliminary investigation (under review) and other works (Crawford & Pineau, 2019), it turned out that this approach is computationally inefficient and shows severe performance degradation with growing M .\nObject-Factorized Conditional Prior. To resolve this problem, in ROOTS we propose to process objects in a spatially local and parallel way instead of sequential processing. This is done by first introducing the scene-volume feature-map. The scene-volume feature-map is obtained by encoding contextC into a 3D tensor ofN = (H×W×L) cells. Each cell n ∈ {1, . . . , N} is then associated to D-dimensional volume feature rn ∈ RD. Thus, the actual output of the encoder is a 4-dimensional tensor r = frepr-scene(C). Each volume feature rn represents a local 3D space in the target 3D space in a similar way that a feature vector in a 2D feature-map of CNN models a local area of an input image. Note, however, that the introduction of the scene-volume feature-map is not the same as the feature-map of 2D images because, unlike the CNN feature-map for images, the actual 3D target space is not directly observable—it is only observed through a proxy of 2D images. For the detail implementation of the encoder frepr-scene, refer to the Appendix A.3.\nGiven the scene-volume feature-map, for each volume cell n = 1, . . . , N but in parallel, we obtain three latent variables (zpresn , z pos n , z what n ) = zn from the 3D-object prior model p(zn|rn). We refer this collection of object latent variables z = {zn}Nn=1 to the scene-object latent-map as z is generated from the scene-volume feature-map r. Here, zpresn is a Bernoulli random variable indicating whether an object is associated (present) to the volume cell or not, zposn is a 3-dimensional coordinate indicating the position of an object in the target 3D space, and zwhatn is a representation vector for the appearance of the object. We defer a more detail description of the 3D-object prior model p(zn|rn) to the next section. Note that in ROOTS we obtain zwhatn as a 3D representation which is invariant to 3D viewpoints. The position and appearance latents for cell n are defined only when the cell has an associated object to represent, i.e., zpresn = 1. From this modeling using scenevolume feature-map, we can obtain M = ∑ n z pres n and the previous prior model can be written as\nfollows: p(z,M | C) = p(M | C) ∏M m=1 p(z(m) | C) =\nN∏ n=1 p(zn | rn) = N∏ n=1 p(zpresn |rn) [ p(zposn | rn)p(zwhatn | rn, zposn ) ]zpresn . (1) In addition to allowing spatially parallel and local object processing, another key idea behind introducing a presence variable per volume cell is to reflect the inductive bias of physics: two objects cannot co-exist at the same position. 
This helps remove the sequential object processing, because dealing with one object does not require considering other objects whose features come from spatially distant areas. Note also that the scene-volume feature-map does not strictly partition the target 3D space, and that the presence variable represents the existence of the center position of an object, not the full volume of an object. Thus, information about an object can exist across neighboring cells.
Hierarchical Object-Oriented Representation. The object-oriented representation z = {z_n} provided by the above prior model is global in the sense that it contains all objects in the whole target 3D space, independently of any query viewpoint. From this global representation and given a query viewpoint, ROOTS generates the 2D image corresponding to the query viewpoint. A naive approach would be to learn a single vector representation p(z^q | z, v^q), but in this case we lose important information: the correspondence between a rendered object in the image and a global object representation z_n. That is, we cannot track from which object representation z_n an object in the image is rendered. In ROOTS, we resolve this problem by introducing a local, 2D-level, object-oriented representation layer. This local, object-oriented, view-dependent representation provides additional useful structure and more interpretability. This 2D local representation is similar to those in AIR (Eslami et al., 2016) and SPAIR (Crawford & Pineau, 2019).
Specifically, for each n for which z^pres_n = 1, a local object representation s_n is generated by conditioning on the global representation set z and the query v^q. Our local object representation model is written as p(s | z, v^q) = ∏_{n=1}^{N} p(s_n | z, v^q). Similar to the decomposition of z_n, a local object representation s_n consists of (s^pres_n, s^pos_n, s^scale_n, s^what_n). Here, s^pres_n indicates whether object n should be rendered in the target image from the perspective of the query. Thus, even if an object exists in the target 3D space, i.e., z^pres_n = 1, s^pres_n can be zero if that object should be invisible from the query viewpoint. Similarly, s^pos_n and s^scale_n represent, respectively, the position and scale of object n in the image (not in the 3D space), and s^what_n represents the appearance to be rendered into the image (thus not 3D-invariant). More details on how to obtain (s^pres_n, s^pos_n, s^scale_n, s^what_n) from z and v^q are given in the next section. Given s = {s_n}, we then render to the canvas to obtain the target image p(x^q | s). Combining all of the above, the generative process of ROOTS is written as follows:
p(x^q | v^q, C) = ∫ p(x^q | s) ∏_{n=1}^{N} p(s_n | z, v^q) ∏_{n=1}^{N} p(z_n | C) dz ds. (2)
See Figure 1 for an overview of the generative process." }, { "heading": "3.2 IMPLEMENTATION DETAILS", "text": "Global 3D-object prior. The 3D-object prior p(z_n | r_n) generates the three latents (z^pres_n, z^pos_n, z^what_n) as follows. It first obtains the presence latent from z^pres_n ~ Bernoulli(f_pres(r_n)) and the 3-dimensional position latent from z^pos_n ~ N(f_µpos(r_n), f_σpos(r_n)). Using these two latents, we then obtain the appearance latent z^what_n. This process is divided into object gathering and object encoding.
For object gathering, for each context image x^k we attend to and crop a patch that corresponds to object n.
Specifically, we first notice that, using the deterministic camera-coordinate projection function f_3D→2D (which we do not learn), we can project, from the perspective of a context viewpoint v^k, a 3D position z^pos_n in the global 3D coordinate system to a 2D position u^pos_{k,n} in the context image x^k, i.e., u^pos_{k,n} = f_3D→2D(z^pos_n, v^k). If object n is invisible from the viewpoint v^k, its projected 2D position u^pos_{k,n} lies outside the image x^k, and we do not crop a patch. For more details about the projection function f_3D→2D, refer to Wikipedia (2019). We also predict the bounding-box scale u^scale_{k,n} = f_scale(u^pos_{k,n}, r_n, v^k). Given the center position and scale, we can crop a patch x^k_n ⊂ x^k using the spatial transformer (Jaderberg et al., 2015). Applying this cropping to all context pairs (v^k, x^k) ∈ C, we gather a set of object image-patches X_n = {x^k_n}_{k=1}^{K} for every n with z^pres_n = 1.
Given the object image-patches X_n, obtaining the object 3D representation z^what_n reduces to a GQN encoding problem. That is, we can simply treat X_n and its corresponding viewpoints as a new object-level context C_n, run an order-invariant GQN representation network r^what_n = f_repr-obj(C_n), and then run ConvDRAW to obtain z^what_n from r^what_n.
Local 2D-object prior. The intermediate prior p(s_n | z, v^q) generates (s^pres_n, s^pos_n, s^scale_n, s^what_n) as follows. Similar to the procedure described above for u^pos_{k,n}, we use the coordinate projection function f_3D→2D to find the position of a global object z_n in the target image from the perspective of the query v^q, i.e., s^pos_{q,n} = f_3D→2D(z^pos_n, v^q). Thus, we model the position as a deterministic variable. The scale s^scale_{q,n} is obtained similarly to u^scale_{k,n}, but with random sampling. Then, we predict s^pres_n ~ Bern(f_pres(s^pos_{q,n}, s^scale_{q,n}, z_n, v^q)). If object n should be visible in the 2D target image, i.e., s^pres_n = 1, we generate the 2D appearance representation s^what_n using an object-level GQN decoder based on ConvDRAW, i.e., s^what_n = ConvDRAW(z^what_n, v^q). Finally, we have p(s_n | z, v^q) = p(s^pos_n | z^pos_n, v^q) p(s^scale_n | s^pos_n, z^what_n, v^q) p(s^pres_n | s^pos_n, s^scale_n, z_n, v^q).
Rendering to the 2D Canvas. A main challenge in rendering the local representation s = {s_n} into the image canvas, i.e., p(x^q | s), is to deal with occlusion. In ROOTS, this can be handled easily by noticing (i) that the coordinate conversion f_3D→2D(z^pos_n, v^q) actually maps a 3D coordinate to another 3D coordinate, and (ii) that the last dimension of the converted coordinate can be interpreted as the orthogonal projection distance from the viewpoint v^q to the object's position z^pos_n. We can use this distance as the object's depth from the observer's perspective. This allows us to sort objects according to their depths and render each object accordingly, handling occlusion correctly. To handle the background, ROOTS has an independent module that infers the background separately at the image level. We also found that learning an object-level mask along with the appearance latent s^what_n helps the segmentation between foreground and background, as well as generating less blurry images. More details on the related implementation are provided in Appendix A.4." }, { "heading": "3.3 LEARNING AND INFERENCE", "text": "Due to the intractability of the posterior p(z, s | C, D), with D = {(v^q_j, x^q_j)}_j the target viewpoint-image pairs, we train ROOTS using variational inference with the following posterior approximation: q_φ(z, s | C, D) = q_φ(z | C, D) q_φ(s | z, C, D).
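Before turning to gradient estimation, the following is a concrete illustration of the deterministic projection f_3D→2D and the depth-based occlusion ordering used throughout Section 3.2. The camera parameterization below (world-to-camera rotation R, camera position, focal length) is an assumption, since the paper defers the exact convention to the standard 3D-projection formulation.

```python
import numpy as np

def project_3d_to_2d(p_world, cam_pos, R, focal=1.0):
    """Minimal pinhole sketch: map a world-frame 3D point to (u, v, depth).
    The third coordinate is the distance along the viewing axis, which the
    renderer can reuse as the object's depth for occlusion handling."""
    p_cam = R @ (p_world - cam_pos)   # world -> camera coordinates
    depth = p_cam[2]                  # assumed positive for visible points
    return np.array([focal * p_cam[0] / depth, focal * p_cam[1] / depth, depth])

def render_order(obj_positions, cam_pos, R):
    """Painter's algorithm: indices sorted far-to-near, so nearer objects
    are drawn last and correctly occlude farther ones."""
    depths = [project_3d_to_2d(p, cam_pos, R)[2] for p in obj_positions]
    return np.argsort(depths)[::-1]
```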
To compute the gradients w.r.t. the continuous latent variables, such as position and appearance, we use the reparameterization trick, and for the discrete presence variables we use a continuous relaxation via the Gumbel-Softmax trick (Jang et al., 2016). Other methods based on the REINFORCE algorithm (Williams, 1992; Tucker et al., 2017; Grathwohl et al., 2017) could also be used. Implementing the approximate posteriors q(z | C, D) and q(s | z, C, D) is made easier by sharing their parameters with those of the conditional prior p(z | C): we only need to additionally provide the target data D. With D = (v^q, x^q) for simplicity, the objective is to maximize the following evidence lower bound (ELBO):
L(θ, φ; C, D) = E_{s,z~q_φ} [log p_θ(x^q | s) − KL[q_φ(s | z, v^q, C, D) ‖ p_θ(s | z, v^q)]] − KL[q_φ(z | C, D) ‖ p_θ(z | C)]. (3)
Combining with an Unconditioned Prior. One difficulty in using the conditional prior is that, because the prior is learned, we have less control over reflecting our prior knowledge in the prior distribution. In our experiments, it turns out that biasing the posteriors of some variables toward our preferred prior values helps stabilize the model. To implement this, during training we use the following objective, which has additional KL terms between the posterior and an unconditioned prior:
L̃ = L(θ, φ; C, D) + KL[q_φ(z^pos, s^scale | C, D) ‖ N(0, I)] + γ KL[q_φ(z^pres | C, D) ‖ Geom(ρ)] + γ E_{z~q_φ} [KL[q_φ(s^pres | z^pres, C, D) ‖ Geom(ρ)]], (4)
where z^pres = {z^pres_n}_n, and z^pos, s^scale, s^pres are defined similarly. Here ρ and γ are hyperparameters weighting the auxiliary loss terms; we set them to 0.999 and 7 during training, respectively. This auxiliary loss can be derived by replacing the conditional prior with the product-of-experts prior p(z | C) p(z) divided by the posterior q(z | C)." }, { "heading": "4 RELATED WORKS", "text": "Although, to our knowledge, there has been no previous work on unsupervised, probabilistic, generative, object-oriented representation learning for 3D scenes containing multiple objects, there is literature on the corresponding 2D problem. The first is the Attend, Infer, Repeat (AIR) model (Eslami et al., 2016). AIR uses the spatial transformer (Jaderberg et al., 2015) to crop an object patch and uses an RNN to sequentially generate the next object conditioned on the previous objects. Crawford & Pineau (2019) showed that this RNN-based rollout is inefficient and can significantly degrade performance as the number of objects increases. Their model, SPAIR, is inspired by YOLO (Redmon et al., 2016), but unlike YOLO it does not require bounding-box labels. The main limitation of SPAIR is that it infers the latents sequentially. Neural Expectation Maximization (NEM) (Greff et al., 2017) considers the observed image as a pixel-level mixture of K objects; an image per object is generated, and the images are combined according to the mixture probabilities. Greff et al. (2019) proposed a more efficient version of NEM, called IODINE, using iterative inference (Marino et al., 2018). MONET (Burgess et al., 2019) uses an RNN that draws one scene component at each time step. Unlike AIR and SPAIR, the representations in NEM, IODINE, and MONET do not explicitly provide natural disentanglement, such as presence and pose per object.
There is a large body of remarkable work on visual 3D learning from the computer vision community.
" }, { "heading": "4 RELATED WORKS", "text": "Although, to our knowledge, there has been no previous work on unsupervised, probabilistic, generative, object-oriented representation learning for 3D scenes containing multiple objects, there is prior literature on the 2D version of the problem. The first is the Attend, Infer, Repeat (AIR) model (Eslami et al., 2016). AIR uses a spatial transformer (Jaderberg et al., 2015) to crop an object patch and an RNN to sequentially generate the next object conditioned on the previous objects. In Crawford & Pineau (2019), the authors showed that this RNN-based rollout is inefficient and can significantly degrade performance as the number of objects increases, and proposed SPAIR. SPAIR is inspired by YOLO (Redmon et al., 2016), but, unlike YOLO, it does not require bounding-box labels. The main limitation of SPAIR is that it infers the latents sequentially. Neural Expectation Maximization (NEM) (Greff et al., 2017) treats the observed image as a pixel-level mixture of K objects: an image is generated per object, and the images are combined according to the mixture probabilities. In Greff et al. (2019), the authors proposed a more efficient version of NEM, called IODINE, using iterative inference (Marino et al., 2018). MONET (Burgess et al., 2019) uses an RNN that draws one scene component at each time step. Unlike AIR and SPAIR, the representations in NEM, IODINE, and MONET do not explicitly provide natural disentanglement such as per-object presence and pose.

There is also a large body of remarkable work on visual 3D learning from the computer vision community. However, as noted in Section 1, the problem settings and proposed models of these works differ from ours in that they (i) do not decompose object-wise representations from a scene containing multiple objects, working mostly on single-object cases (Wu et al., 2016; Yan et al., 2016; Choy et al., 2016; Kar et al., 2017; Nguyen-Phuoc et al., 2019), (ii) are supervised approaches (Huang et al., 2018; Tulsiani et al., 2018; Cheng et al., 2018; Shin et al., 2019; Du et al., 2018), (iii) learn image generation (synthesis) of a 3D scene without a scene representation (Sitzmann et al., 2019; Kato & Harada, 2019; Pinheiro et al., 2019; Tulsiani et al., 2017), or learn a scene representation without learning to render (Zhou et al., 2017; Yu & Wang, 2018), or (iv) are not end-to-end. Many of the more traditional works in 3D computer vision are also relevant to ours, but most of them are neither based on neural networks nor end-to-end trainable." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate ROOTS quantitatively and qualitatively. We train ROOTS with the same hyperparameters on all datasets. We first briefly describe the datasets; for more details of the dataset generation and network architecture, refer to Appendix A.5 and Appendix A.4. We use MuJoCo (Todorov et al., 2012) to simulate 3D scenes. Specifically, we generate three datasets of scenes with 1-3 objects, 2-4 objects, and 3-5 objects, respectively. We set the image size to 64×64×3 pixels. For each object, we randomly choose its position, shape, size, and color. Although we put all objects on the floor, we still predict 3-dimensional coordinate values for z^{pos}_n because objects have different sizes. We generate 60K different scenes for each dataset and split them into 50K for training, 5K for validation, and 5K for testing. We use CGQN as the baseline implementation, but in the rest of the section we use the abbreviation 'GQN' instead of 'CGQN' to indicate the general GQN framework." }, { "heading": "5.1 QUALITATIVE EVALUATION", "text": "First, we compare generations from ROOTS and GQN by visualizing several samples of the same scene under the same set of query viewpoints. Note that the goal of this experiment is to show that achieving object-wise representation in ROOTS does not degrade its generation quality relative to GQN. As seen in Figure 2, ROOTS generates slightly sharper object boundaries, while GQN generates blurrier images. We believe this is due to the object-wise generation using segmentation masks. Then, to provide further insight into the advantages of object-oriented representation, we visualize decomposed generations from ROOTS in Figure 3.A. We can see that ROOTS separately generates a clean background, complete objects, a precise foreground, and a detailed occlusion mask. Furthermore, to demonstrate the viewpoint-invariance of the global object representation z^{what}_n, Figure 3.B shows generations of two objects under different query viewpoints: ROOTS recovers each object with the pose corresponding to the query viewpoint. GQN cannot provide such decomposed generations because its representation is scene-level, with no object decomposition.

Object-wise disentanglement. Here we show generations from random viewpoints after changing object positions. In this section and the following sections, we use ROOTS trained on the 2-4 object dataset for demonstration unless otherwise stated.
To verify the disentanglement property of the object-oriented representation learned by ROOTS, we carry out an experiment in which we arbitrarily modify z^{pos}_n of one object in a scene. For a well-disentangled latent representation, this modification should not affect the position, existence, or appearance of the other objects in the scene, while occlusion should still be handled properly. More importantly, we should be able to change either the x or the y coordinate independently, and this change should be consistent across viewpoints. To demonstrate this, after changing one dimension of z^{pos}_n of the red cylinder, as shown in Figure 4, we feed the modified ẑ^{pos}_n back to the model and follow the ROOTS generative process in the same way as during testing. Generations under 4 different viewpoints are shown in Figure 4. We can see that ROOTS learns a strongly disentangled representation and, thanks to this, handles occlusions well.

Compositionality. As stated in earlier sections, a 3D scene can be decomposed into several independent objects; another advantage of object-oriented representations is therefore that a new scene can easily be built from selected object components. Thus, by simple combination, we can build novel scenes. To demonstrate this, we first provide ROOTS with three sets of context images from three different scenes and, for each scene, save the learned object representations z = {z^{pos}_n, z^{what}_n} for all n with z^{pres}_n = 1. We then swap one object between the first two scenes and add one additional object to the third scene, as shown in Figure 5. We see that the object components learned by ROOTS are fully disentangled and can be reused to create new scenes. Also, by adding one new object, we create a scene with 5 objects, which does not exist in the training dataset. Another example, showing a scene with 9 objects, is visualized in Figure 6, where we use ROOTS trained on the 1-3 objects dataset. More details about this experiment can be found in Appendix A.2.

Partial Observability. The successful generation of ROOTS, even when only partial observations are provided, comes from gathering object evidence across viewpoints: if an object is invisible from one viewpoint, ROOTS can learn it from other viewpoints. Here, we push partial observability to the extreme case where one object is entirely invisible from all context viewpoints. In this case, ROOTS should not be able to predict the object's existence, just like a human observing the physical world. To show this, we manually select as context images from a scene in which one object is missing, with the remaining images serving as targets for ROOTS to generate. We show the results in Figure 7." }, { "heading": "5.2 QUANTITATIVE EVALUATION", "text": "NLL and MSE. In this section, we compare ROOTS and GQN quantitatively using the negative log-likelihood (NLL) and the mean squared error (MSE) on the test dataset. The goal is to show quantitatively that achieving object-wise representation in ROOTS does not degrade its generation quality relative to GQN. We approximate the NLL using importance sampling with K = 50 samples and report the image NLL normalized by the number of pixels in Table 1. Both GQN and ROOTS are trained for 120 epochs on each dataset, ensuring that both have converged. We see that, although it learns object-factorized representations, ROOTS achieves an NLL comparable to GQN.
Due to its object-wise representation, ROOTS recovers objects separately and, together with its independent background module, achieves better MSE than GQN. This benefit becomes clearer as the number of objects in the scene grows.

Object Detection Quality. In this section, we provide precision and recall results as an object-detection evaluation of ROOTS only, since GQN cannot detect objects. To determine true-positive predictions, we first filter out predictions that are too far from any ground truth by applying a radius threshold, where distance is the Euclidean distance between the two center points. We then assign each predicted object to the ground-truth object nearest to it; multi-association is not allowed. We normalize the coordinate values to the range [−1, 1] for better interpretability of the applied radius threshold; in these units, the average object size is 0.3. We report precision and recall under different thresholds in Table 2. Besides precision and recall, we also report the counting accuracy of ROOTS. A sketch of one such matching procedure is given below.
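One reasonable implementation of this greedy, one-to-one matching is sketched below. The paper does not specify a tie-breaking order, so visiting candidate pairs from closest to farthest is an assumption here.

```python
import numpy as np

def detection_precision_recall(pred, gt, radius=0.15):
    """Greedy one-to-one matching of predicted object centers to ground truth.

    pred, gt: (P, 3) and (G, 3) arrays of object centers (normalized to [-1, 1]).
    Predictions farther than `radius` from every unmatched ground-truth center
    never count as true positives, and multi-association is not allowed.
    """
    dists = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (P, G)
    matched_p, matched_g = set(), set()
    tp = 0
    # Visit candidate pairs from closest to farthest.
    for p, g in sorted(np.ndindex(*dists.shape), key=lambda ij: dists[ij]):
        if dists[p, g] > radius:
            break  # all remaining pairs are even farther apart
        if p not in matched_p and g not in matched_g:
            matched_p.add(p)
            matched_g.add(g)
            tp += 1
    precision = tp / max(len(pred), 1)
    recall = tp / max(len(gt), 1)
    return precision, recall
```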
" }, { "heading": "6 CONCLUSION", "text": "We proposed ROOTS, a probabilistic generative model for unsupervised learning of 3D scene representation and rendering. ROOTS learns an interpretable, hierarchical, object-oriented 3D scene representation. In experiments, we showed the generation, decomposition, and detection abilities of ROOTS. For compositionality and transferability, we also showed that, thanks to the factorized, structured representation, new scenes can easily be built by reusing components from other 3D scenes. An interesting future direction would be to learn knowledge of the 3D world in a sequential manner, as we humans keep updating and inferring knowledge through time." }, { "heading": "A APPENDIX", "text": "A.1 GENERALIZATION

In this section, we quantitatively analyze the generalization performance of ROOTS on the tasks of generation and detection. For generation, we compare ROOTS and GQN: we train both models on our three datasets until full convergence and test each model trained on one dataset (e.g., the 1-3 objects dataset) on the other two datasets (e.g., the 2-4 and 3-5 objects datasets). For detection, since GQN only provides a scene-level representation (detection is not possible), we report the detection performance of ROOTS only. Generation results (MSE) and detection results (precision and recall) are shown in Table 3 and Table 4, respectively. Precision and recall are reported with the radius threshold set to 0.15 units in the MuJoCo physics world. We observe the following trends. When trained on a dataset with a small number of objects (e.g., 1-3) and tested on scenes with a large number of objects (e.g., 3-5), neither ROOTS nor GQN generalizes well. This is because, during testing, latent variables are sampled from a learned prior: if the model never sees scenes with more objects during training, it is hard for it to generalize. We also note that, compared with GQN, ROOTS obtains a larger MSE error in this direction; together with the observations in the opposite case, we hypothesize that this sensitivity comes from the discrete variable representing the existence of an object, whereas GQN, with a single vector representing the full scene, degrades more smoothly. When trained on a dataset with a large number of objects and tested on a small number of objects, ROOTS and GQN behave similarly: both perform well. In this case, however, ROOTS produces better generation results, which we believe is a benefit of the object-wise generation and the separate background module, whereas GQN compresses all information into one vector. These conclusions are consistent with the detection performance of ROOTS. One interesting finding worth further investigation is that ROOTS trained on the 3-5 objects dataset achieves better results (lower MSE, higher precision and recall) than ROOTS trained on the 1-3 objects dataset when testing on the 1-3 objects dataset; a similar phenomenon occurs when testing on the 2-4 objects dataset.

A.2 COMPOSING WITH MORE OBJECTS

In this section, we provide one more example of compositionality. We first train ROOTS on the 1-3 objects dataset. As shown in Figure 8, during testing we collect object representations from 7 different scenes and reuse them to compose a new scene with 9 objects. The sampled center position of each object and the query viewpoints (represented by camera position, pitch, and roll) fed into ROOTS are given in canonical coordinates. Note that, to correctly predict the scale of objects in the 2D projection (e.g., for the same object, the larger its distance from the viewpoint, the smaller it should appear in the 2D projection), ROOTS learns to infer a local depth (position translation) for each object given a specific query viewpoint. This can be observed in the yellow sphere, green cylinder, and blue cube in the bottom row of Figure 8.

A.3 DETAILS OF COMPONENT MODULES

We first introduce some important building blocks for implementing our model in this section, and then sketch the implementation steps using these modules in the following section.

Scene Representation Network: We use the Scene Representation Network to implement the order-invariant encoder at the scene level, f_repr-scene(·). This network is a modified version of the Representation Network in GQN: we adjust the kernel sizes and add a Conv3D layer so that the output conv-features fit our needs, as shown in Figure 9. The scene representation network takes an <image, viewpoint> pair as input and outputs a conv-feature map. To implement order invariance, we take the mean of these outputs over all pairs from the same scene.

Object Representation Network: We use the Object Representation Network to implement the order-invariant encoder at the object level, f_repr-obj(·). As visualized in Figure 10, we add a branch that passes low-level features through to provide richer conv-features. The dimension d of the second input equals the sum of the dimensions of {z^pres, s^scale, s^pos}; the corresponding values are listed in Table 5. Order invariance is implemented in the same way as in the Scene Representation Network.

Convolutional LSTM Cell: We use a ConvLSTM as the RNN module in ConvDRAW. In a ConvLSTM, all fully-connected layers are replaced with convolutional layers. One update step is described as:

(h_{i+1}, c_{i+1}) ← ConvLSTM(x_i, h_i, c_i)

where h_i is the output of the cell, c_i is the recurrent state of the ConvLSTM, and x_i is the input. Both h_i and c_i are initialized with zeros at step i = 0. A minimal sketch of such a cell is given below.
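The following is a minimal sketch of a standard ConvLSTM cell (convolutional affine maps with the usual LSTM gating); the channel sizes in the usage example are illustrative, not the values used in ROOTS.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A minimal ConvLSTM cell: an LSTM whose affine maps are convolutions."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2  # keep the spatial size unchanged
        self.conv = nn.Conv2d(in_channels + hidden_channels,
                              4 * hidden_channels, kernel_size, padding=padding)

    def forward(self, x, h, c):
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)  # input, forget, output, candidate
        c_next = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_next = torch.sigmoid(o) * torch.tanh(c_next)
        return h_next, c_next

cell = ConvLSTMCell(in_channels=8, hidden_channels=16)
x = torch.randn(2, 8, 16, 16)
h = torch.zeros(2, 16, 16, 16)  # h and c start at zero for step i = 0
c = torch.zeros(2, 16, 16, 16)
h, c = cell(x, h, c)
```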
ConvDRAW: We outline one ConvDRAW step (denoted l) as used in ROOTS, for the generative and inference processes separately; a code sketch of the generative rollout follows this description.

• Generative Process:

(h^{(l+1)}_p, c^{(l+1)}_p) ← ConvLSTM_θ(x^{(l)}, z^{(l)}, h^{(l)}_p, c^{(l)}_p)
z^{(l+1)} ∼ StatisNet(h^{(l+1)}_p)

• Inference Process:

(h^{(l+1)}_q, c^{(l+1)}_q) ← ConvLSTM_φ(y, x^{(l)}, h^{(l)}_p, h^{(l)}_q, c^{(l)}_q)
z^{(l+1)} ∼ StatisNet(h^{(l+1)}_q)
(h^{(l+1)}_p, c^{(l+1)}_p) ← ConvLSTM_θ(e^{(l)}, z^{(l)}, h^{(l)}_p, c^{(l)}_p)

where x^{(l)} is the input at the l-th step, y is the reconstruction target, and z^{(l+1)} is the latent sampled at the l-th step. We denote the prior and posterior modules with subscripts p and q, respectively; θ and φ are neural network parameters. StatisNet is described in the following paragraph. In the Renderer network, we replace StatisNet with a deterministic decoder.
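Below is a minimal sketch of the generative rollout: L recurrent refinement steps, each producing sufficient statistics (µ, log σ) and a reparameterized latent sample. For brevity, the ConvLSTM update is replaced by a single convolution with a tanh nonlinearity, so this illustrates the control flow rather than the exact ROOTS modules; all channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class ConvDrawPrior(nn.Module):
    """One-variable ConvDRAW-style prior: L recurrent steps refining a latent."""

    def __init__(self, x_ch, z_ch, h_ch, steps=4):
        super().__init__()
        self.steps = steps
        # Stand-in for the ConvLSTM update (cell state omitted for brevity).
        self.rnn = nn.Conv2d(x_ch + z_ch + h_ch, h_ch, 3, padding=1)
        # Sufficient-statistics head producing (mu, log sigma).
        self.stats = nn.Conv2d(h_ch, 2 * z_ch, 1)
        self.z_ch = z_ch

    def forward(self, x):
        b, _, hh, ww = x.shape
        h = x.new_zeros(b, self.rnn.out_channels, hh, ww)
        z = x.new_zeros(b, self.z_ch, hh, ww)
        zs = []
        for _ in range(self.steps):
            h = torch.tanh(self.rnn(torch.cat([x, z, h], dim=1)))
            mu, log_sigma = self.stats(h).chunk(2, dim=1)
            z = mu + log_sigma.exp() * torch.randn_like(mu)  # reparameterized sample
            zs.append(z)
        # "Concat" option: use all intermediate latents z^(l), l <= L.
        return torch.cat(zs, dim=1)

prior = ConvDrawPrior(x_ch=8, z_ch=4, h_ch=16)
z = prior(torch.randn(2, 8, 8, 8))  # (2, 16, 8, 8) with 4 rollout steps
```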
Sufficient Statistic Networks: A Sufficient Statistic Network outputs the sufficient statistics of a pre-defined distribution, e.g., µ and σ for a Gaussian, given its inputs. We list the configurations of all Sufficient Statistic Networks used during generation in Table 5. For z^{pos}_n, z^{what}_n, and s^{scale}_{n,i}, we use an auto-regressive scheme (ConvDRAW) to learn the sufficient statistics; for z^{pres}_n and s^{pres}_{q,n}, we use regular convolutional layers. The third column gives the kernel size of the first convolutional layer; the remaining layers are 1×1 Conv3D layers. If the kernel size is 3, we use one-pixel zero padding and stride 1 to keep the spatial size unchanged. The "Draw Rollouts" column shows the number of DRAW steps applied. The "Concat" column specifies how the sampled latent values are used in subsequent processing, for example, concatenating all z^{(l)} for l < L, or taking only z^{(L)}, where L is the number of rollout steps.

GQN Implementation: We strictly follow the original paper for implementation details. The only difference is that we enhance the decoder in the renderer by adding two more convolutional layers.

A.4 IMPLEMENTATION DETAILS

Here we give the details of the implementation of ROOTS; details of the component modules are given in Appendix A.3. We first outline the generation and inference processes for one object, indexed by n; parallelizing over multiple objects is straightforward.

Generation Process. A superscript on ConvDRAW indicates which variable that ConvDRAW module is responsible for. The Renderer module is implemented in the same way as ConvDRAW, with the one difference that we do not model any random variable in the Renderer, making it a deterministic decoder. All ConvDRAW modules have a hidden state size of 128. We use ST to denote the spatial transformation and ST^{-1} its reverse. K denotes the index set of the context C, and θ and φ are neural network parameters.

r ← Σ_{k∈K} f_repr-scene(x^k, v^k; θ)   (Obtain scene-volume feature map)   (5)
z^{pos}_n ∼ ConvDRAW^{pos}_θ(r_n)   (Sample 3D position)   (6)
z^{pres}_n ∼ CONV_θ(r_n)   (Sample global presence)   (7)
u^{pos}_{k,n} ← f_{3D→2D}(z^{pos}_n, v^k), k ∈ K   (Perspective projection)   (8)
u^{scale}_{k,n} ∼ ConvDRAW^{scale}_θ(r_n, v^k, u^{pos}_{k,n}), k ∈ K   (Sample object scale)   (9)
x^k_n ← ST(x^k, [u^{pos}_{k,n}, u^{scale}_{k,n}]), k ∈ K   (Crop object patch)   (10)
r^{what}_n ← Σ_{k∈K} f_repr-obj(x^k_n, v^k)   (Object-level encoding)   (11)
z^{what}_n ∼ ConvDRAW^{what}_θ(r^{what}_n)   (Sample global what)   (12)
s^{pos}_{q,n} ← f_{3D→2D}(z^{pos}_n, v^q)   (Perspective projection)   (13)
s^{scale}_{q,n} ∼ ConvDRAW^{scale}_θ(r_n, s^{pos}_{q,n}, v^q)   (Sample object scale)   (14)
s^{pres}_{q,n} ∼ CONV_θ(z^{pres}_n, r^{what}_n, v^q, s^{pos}_{q,n}, s^{scale}_{q,n})   (Sample local presence)   (15)
x̂^q_n, α^q_n ← Renderer_θ(z^{what}_n, v^q)   (Decode the object patch and object mask)   (16)
x̂^q_n ← ST^{-1}(α^q_n × x̂^q_n, [s^{pos}_{q,n}, s^{scale}_{q,n}])   (Place object patch)   (17)
α^q_n ← ST^{-1}(α^q_n, [s^{pos}_{q,n}, s^{scale}_{q,n}])   (18)
α^{occ}_{q,n} ← [ depth_n × α^q_n == min_{n'}(depth_{n'} × α^q_{n'}) ]   (Obtain occlusion mask)   (19)
x̂^q_fg ← Σ_n (α^{occ}_{q,n} × x̂^q_n × s^{pres}_{q,n})   (Generate foreground)   (20)
α^q_fg ← Σ_n (α^q_n × α^{occ}_{q,n} × s^{pres}_{q,n})   (Generate foreground mask)   (21)
z^{bg} ∼ ConvDRAW^{bg}_θ(r)   (Sample background)   (22)
x̂^q_bg ← BgRenderer(z^{bg}, v^q)   (Decode background)   (23)
x̂^q ← x̂^q_fg + (1 − α^q_fg) × x̂^q_bg   (Render generation)   (24)

Inference Process. The inference modules parallel the generative modules; we only highlight the differences below.

r^q ← Σ_q f_repr-scene(x^q, v^q; θ)   (Obtain scene-volume feature map)   (25)
z^{pos}_n ∼ ConvDRAW^{pos}_{θ,φ}(r^q_n, r_n)   (Sample 3D position for object n)   (26)
z^{pres}_n ∼ CONV_φ(r^q_n, r_n)   (Sample global presence)   (27)
s^{pos}_{q,n} ← f_{3D→2D}(z^{pos}_n, v^q)   (Perspective projection)   (28)
s^{scale}_{q,n} ∼ ConvDRAW^{scale}_{θ,φ}(r_n, r^q_n, s^{pos}_{q,n}, v^q)   (Sample object scale)   (29)
x^q_n ← ST(x^q, [s^{pos}_{q,n}, s^{scale}_{q,n}])   (Crop object patch)   (30)
r^{what}_{q,n} ← Σ_q f_repr-obj(x^q_n, v^q)   (Encode object-level context)   (31)
z^{what}_n ∼ ConvDRAW^{what}_{θ,φ}(r^{what}_n, r^{what}_{q,n})   (Sample global what)   (32)
s^{pres}_{q,n} ∼ CONV_φ(r^{what}_{q,n}, r^{what}_n, s^{pos}_{q,n}, s^{scale}_{q,n}, v^q)   (Sample local presence)   (33)
z^{bg} ∼ ConvDRAW^{bg}_{θ,φ}(r^q, r)   (Sample background)   (34)

A.5 DATASET DETAILS

Object sizes are randomly chosen between 0.56 and 0.66 units in the MuJoCo physics world. Each object is placed on the floor of a 3D space spanning [−2, 2] along both the x-axis and the y-axis. We use three object types (cube, sphere, and cylinder) in 6 different colors. For each dataset, we first randomly choose the number of objects in a scene, then randomly choose each object's type, color, and position (x and y coordinates). For each scene, we place 30 cameras at a radius of 3, pointing at a square area located at the center; thus, a camera does not always look exactly at the center point. The pitch is randomly chosen from [−π/6, −π/7] and the yaw from [−π, π]. During training, we randomly split each scene into a context set and a query set, where the number of <image, viewpoint> pairs in the context set is randomly chosen from the range [10, 20] and the remaining pairs serve as the query set." } ]
2019
null
SP:05a329e1e9faa9917c278dd2ba1eb5090189bdf9
[ "This paper presents a method for single image 3D reconstruction. It is inspired by implicit shape models, like presented in Park et al. and Mescheder et al., that given a latent code project 3D positions to signed distance, or occupancy values, respectively. However, instead of a latent vector, the proposed method directly outputs the network parameters of a second (mapping) network that displaces 3D points from a given canonical object, i.e., a unit sphere. As the second network maps 3D points to 3D points it is composable, which can be used to interpolate between different shapes. Evaluations are conducted on the standard ShapeNet dataset and the yields results close to the state-of-the-art, but using significantly less parameters.", "This work is focused on learning 3D object representations (decoders) that can be computed more efficiently than existing methods. The computational inefficiency of these methods is that you learn a (big) fixed decoder for all objects (all z latents), and then need to apply it individually on either each point cloud point you want to produce, or each voxel in the output (this problem exists for both the class of methods that deform a uniform distribution R^3 -> R^3 a la FoldingNet, or directly predict the 3D function R^3 -> R e.g. DeepSDF). The authors propose that the encoder directly predict the weights and biases of a decoder network that, since it is specific to the particular object being reconstructed, can be much smaller and thus much cheaper to compute." ]
We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second ‘mapping’ network. This mapping network can be used to reconstruct an object by applying its encoded transformation to points randomly sampled from a simple geometric space, such as the unit sphere. We study the effectiveness of our method through various experiments on subsets of the ShapeNet dataset. We find that the proposed approach can reconstruct encoded objects with accuracy equal to or exceeding state-of-the-art methods with orders of magnitude fewer parameters. Our smallest mapping network has only about 7000 parameters and shows reconstruction quality on par with state-of-the-art object decoder architectures with millions of parameters. Further experiments on feature mixing through the composition of learned functions show that the encoding captures a meaningful subspace of objects. ‡
[ { "affiliations": [], "name": "Eric Mitchell" }, { "affiliations": [], "name": "Selim Engin" }, { "affiliations": [], "name": "Volkan Isler" }, { "affiliations": [], "name": "Daniel D Lee" } ]
[ { "authors": [ "Angel X. Chang", "Thomas A. Funkhouser", "Leonidas J. Guibas", "Pat Hanrahan", "Qi-Xing Huang", "Zimo Li", "Silvio Savarese", "Manolis Savva", "Shuran Song", "Hao Su", "Jianxiong Xiao", "Li Yi", "Fisher Yu" ], "title": "Shapenet: An information-rich 3d model repository", "venue": "CoRR, abs/1512.03012,", "year": 2015 }, { "authors": [ "Christopher B Choy", "Danfei Xu", "JunYoung Gwak", "Kevin Chen", "Silvio Savarese" ], "title": "3d-r2n2: A unified approach for single and multi-view 3d object reconstruction", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Christian Häne", "Shubham Tulsiani", "Jitendra Malik" ], "title": "Hierarchical surface prediction for 3d object reconstruction", "venue": "In 2017 International Conference on 3D Vision (3DV),", "year": 2017 }, { "authors": [ "Maxim Tatarchenko", "Alexey Dosovitskiy", "Thomas Brox" ], "title": "Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Gernot Riegler", "Ali Osman Ulusoy", "Andreas Geiger" ], "title": "Octnet: Learning deep 3d representations at high resolutions", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Nanyang Wang", "Yinda Zhang", "Zhuwen Li", "Yanwei Fu", "Wei Liu", "Yu-Gang Jiang" ], "title": "Pixel2mesh: Generating 3d mesh models from single rgb images", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Georgia Gkioxari", "Jitendra Malik", "Justin Johnson" ], "title": "Mesh r-cnn", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Edward Smith", "Scott Fujimoto", "Adriana Romero", "David Meger" ], "title": "GEOMetrics: Exploiting geometric structure for graph-encoded objects", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Rana Hanocka", "Amir Hertz", "Noa Fish", "Raja Giryes", "Shachar Fleishman", "Daniel Cohen-Or" ], "title": "Meshcnn: A network with an edge", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Haoqiang Fan", "Hao Su", "Leonidas J Guibas" ], "title": "A point set generation network for 3d object reconstruction from a single image", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Charles Ruizhongtai Qi", "Li Yi", "Hao Su", "Leonidas J Guibas" ], "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yaoqing Yang", "Chen Feng", "Yiru Shen", "Dong Tian" ], "title": "Foldingnet: Point cloud auto-encoder via deep grid deformation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jeong Joon Park", "Peter Florence", "Julian Straub", "Richard Newcombe", "Steven Lovegrove" ], "title": "Deepsdf: Learning continuous signed distance 
functions for shape representation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Panos Achlioptas", "Olga Diamanti", "Ioannis Mitliagkas", "Leonidas Guibas" ], "title": "Learning representations and generative models for 3d point clouds", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Chen-Hsuan Lin", "Chen Kong", "Simon Lucey" ], "title": "Learning efficient point cloud generation for dense 3d object reconstruction", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Xinchen Yan", "Jimei Yang", "Ersin Yumer", "Yijie Guo", "Honglak Lee" ], "title": "Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Mateusz Michalkiewicz", "Jhony K. Pontes", "Dominic Jack", "Mahsa Baktashmotlagh", "Anders Eriksson" ], "title": "Deep level sets: Implicit surface representations for 3d shape inference", "venue": "CoRR, abs/1901.06802,", "year": 2019 }, { "authors": [ "Lars Mescheder", "Michael Oechsle", "Michael Niemeyer", "Sebastian Nowozin", "Andreas Geiger" ], "title": "Occupancy networks: Learning 3d reconstruction in function space", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "William E. Lorensen", "Harvey E. Cline" ], "title": "Marching cubes: A high resolution 3d surface construction algorithm", "venue": "SIGGRAPH, pages 163–169", "year": 1987 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Learning to control fast-weight memories: An alternative to dynamic recurrent networks", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Bert De Brabandere", "Xu Jia", "Tinne Tuytelaars", "Luc Van Gool" ], "title": "Dynamic filter networks", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Benjamin Klein", "Lior Wolf", "Yehuda Afek" ], "title": "A dynamic convolutional layer for short rangeweather prediction", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "year": 2015 }, { "authors": [ "Gernot Riegler", "Samuel Schulter", "Matthias Ruther", "Horst Bischof" ], "title": "Conditioned regression models for non-blind single image super-resolution", "venue": "In 2015 IEEE International Conference on Computer Vision (ICCV)", "year": 2015 }, { "authors": [ "Thibault Groueix", "Matthew Fisher", "Vladimir G Kim", "Bryan C Russell", "Mathieu Aubry" ], "title": "A papier-mâché approach to learning 3d surface generation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hao Li", "Zheng Xu", "Gavin Taylor", "Christoph Studer", "Tom Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "H. Sebastian Seung" ], "title": "Continuous attractors and oculomotor control", "venue": "Neural Networks,", "year": 1998 }, { "authors": [ "Maxim Tatarchenko", "Stephan R Richter", "René Ranftl", "Zhuwen Li", "Vladlen Koltun", "Thomas Brox" ], "title": "What do single-view 3d reconstruction networks learn", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Stephan R. 
Richter", "Stefan Roth" ], "title": "Matryoshka networks: Predicting 3d geometry via nested shape layers. In CVPR, pages 1936–1944", "venue": "IEEE Computer Society,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Training pruned neural networks", "venue": "CoRR, abs/1803.03635,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Selim Engin", "Eric Mitchell", "Daewon Lee", "Volkan Isler", "Daniel D Lee" ], "title": "Higher order function networks for view planning and multi-view reconstruction", "venue": "In International Conference on Robotics and Automation (ICRA)", "year": 2020 }, { "authors": [ "Jiajun Wu", "Chengkai Zhang", "Xiuming Zhang", "Zhoutong Zhang", "William T Freeman", "Joshua B Tenenbaum" ], "title": "Learning shape priors for single-view 3d completion and reconstruction", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio" ], "title": "Deep sparse rectifier neural networks", "venue": "Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "Lin" ], "title": "2018) in the context of path planning around the reconstructed model", "venue": null, "year": 2018 }, { "authors": [ "Lin" ], "title": "CLASS BREAKDOWN FOR LVC EXPERIMENT CHAMFER DISTANCE SCORES Table 7 shows class-wise Chamfer Distances for HOF and the baseline methods in the LVC experiment. Table 7: Class-weighted asymmetric Chamfer distance results for our method compared to other recent methods for 3D reconstruction from images", "venue": null, "year": 2018 }, { "authors": [ "Lin" ], "title": "Published as a conference paper at ICLR 2020 B TRAINING/TESTING DATASET AND IMPLEMENTATION DETAILS In the reconstruction experiment, Chamfer Distance scores are scaled up by 100 as in Lin et al. (2018) for easier comparison. For the numbers reported in Table 7, we use the best performance of 3d-r2n2 (5 views as reported", "venue": null, "year": 2018 }, { "authors": [ "Yang" ], "title": "2019), we focus on efficiency of representation rather than reconstruction quality. The performance comparison in Figure 3 and the ablation experiment in Tables 3 attempt to compare these architectures in this way (FoldingNet is a slightly shallower version of the DeepSDF architecture; 6 rather than 8 fully-connected layers)", "venue": null, "year": 2019 }, { "authors": [ "Yan" ], "title": "We use two different datasets for evaluation. First, in Table 1, we use the ShapeNet train/validation/test splits of a subset of the ShapeNet dataset (Chang et al., 2015", "venue": null, "year": 2016 }, { "authors": [ "Tatarchenko" ], "title": "2019) dataset Category AtlasNet OGN Matryoshka Retrieval Oracle NN HOF (Ours) airplane", "venue": null, "year": 2019 } ]
[ { "heading": null, "text": "We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second ‘mapping’ network. This mapping network can be used to reconstruct an object by applying its encoded transformation to points randomly sampled from a simple geometric space, such as the unit sphere. We study the effectiveness of our method through various experiments on subsets of the ShapeNet dataset. We find that the proposed approach can reconstruct encoded objects with accuracy equal to or exceeding state-of-the-art methods with orders of magnitude fewer parameters. Our smallest mapping network has only about 7000 parameters and shows reconstruction quality on par with state-of-the-art object decoder architectures with millions of parameters. Further experiments on feature mixing through the composition of learned functions show that the encoding captures a meaningful subspace of objects. ‡" }, { "heading": "1 INTRODUCTION", "text": "This paper is primarily concerned with the problem of learning compact 3D object representations and estimating them from images. If we consider an object to be a continuous surface in R3, it is not straightforward to directly represent this infinite set of points in memory. In working around this problem, many learning-based approaches to 3D object representation suffer from problems related to memory usage, computational burden, or sampling efficiency. Nonetheless, neural networks with tens of millions of parameters have proven effective tools for learning expressive representations of geometric data. In this work, we show that object geometries can be encoded into neural networks with thousands, rather than millions, of parameters with little or no loss in reconstruction quality.\nTo this end, we propose an object representation that encodes an object as a function that maps points from a canonical space, such as the unit sphere, to the set of points defining the object. In this work, the function is approximated with a small multilayer perceptron. The parameters of this function are estimated by a ‘higher order’ encoder network, thus motivating the name for our method: Higher-Order Function networks (HOF). This procedure is shown in Figure 1. There are two key ideas that distinguish HOF from prior work in 3D object representation learning: fast-weights object encoding and interpolation through function composition.\n(1) Fast-weights object encoding: ‘Fast-weights’ in this context generally refers to methods that use network weights and biases that are not fixed; at least some of these parameters are estimated on a per-sample basis. Our fast-weights approach stands in contrast to existing methods which encode objects as vector-valued inputs to a decoder network with fixed weights. Empirically, we find that our approach enables a dramatic reduction (two orders of magnitude) in the size of the mapping network compared to the decoder networks employed by other methods.\n(2) Interpolation through function composition: Our functional formulation allows for interpolation between inputs by composing the roots of our reconstruction functions. We demonstrate that the\n1Stanford University 2Samsung AI Center - New York 3University of Minnesota †Work performed while an intern at Samsung AI Center - New York. ‡ See https://saic-ny.github.io/hof for code and additional information. 
Correspondence to: Eric Mitchell <eric.mitchell@cs.stanford.edu>.\nfunctional representation learned by HOF provides a rich latent space in which we can ‘interpolate’ between objects, producing new, coherent objects sharing properties of the ‘parent’ objects.\nIn order to position HOF among other methods for 3D reconstruction, we first define a taxonomy of existing work and show that HOF provides a generalization of current best-performing methods. Afterwards, we demonstrate the effectiveness of HOF on the task of 3D reconstruction from an RGB image using a subset of the ShapeNet dataset (Chang et al., 2015). The results, reported in Tables 1 and 2 and Figure 2, show state-of-the-art reconstruction quality using orders of magnitude fewer parameters than other methods." }, { "heading": "2 RELATED WORK", "text": "The selection of object representation is a crucial design choice for methods addressing 3D reconstruction. Voxel-based approaches (Choy et al., 2016; Häne et al., 2017) typically use a uniform discretization of R3 in order to extend highly successful convolutional neural network (CNN) based approaches to three dimensions. However, the inherent sparsity of surfaces in 3D space make voxelization inefficient in terms of both memory and computation time. Partition-based approaches such as octrees (Tatarchenko et al., 2017; Riegler et al., 2017) address the space efficiency shortcomings of voxelization, but they are tedious to implement and more computationally demanding to query. Graph-based models such as meshes (Wang et al., 2018; Gkioxari et al., 2019; Smith et al., 2019; Hanocka et al., 2019) provide a compact representation for capturing topology and surface level information, however their irregular structure makes them harder to learn. Point set representations, discrete (and typically finite) subsets of the continuous geometric object, have also gained popularity due to the fact that they retain the simplicity of voxel based methods while eliminating their storage and computational burden (Qi et al., 2017a; Fan et al., 2017; Qi et al., 2017b; Yang et al., 2018; Park et al., 2019). The PointNet architecture (Qi et al., 2017a;b) was an architectural milestone that made manipulating point sets with deep learning methods a competitive alternative to earlier approaches; however, PointNet is concerned with processing, rather than generating, point clouds. Further, while point clouds are more flexible than voxels in terms of information density, it is still not obvious how\nto adapt them to the task of producing arbitrary- or varied-resolution predictions. Independently regressing each point in the point set requires additional parameters for each additional point (Fan et al., 2017; Achlioptas et al., 2018), which is an undesirable property if the goal is high-resolution point clouds.\nMany current approaches to representation and reconstruction follow an encoder-decoder paradigm, where the encoder and decoder both have learned weights that are fixed at the end of training. An image or set of 3D points is encoded as a latent vector ‘codeword’ either with a learned encoder as in Yang et al. (2018); Lin et al. (2018); Yan et al. (2016) or by direct optimization of the latent vector itself with respect to a reconstruction-based objective function as in Park et al. (2019). Afterwards, the latent code is decoded by a learned decoder into a reconstruction of the desired object by one of two methods, which we call direct decoding and contextual mapping. 
Direct decoding methods directly map the latent code into a fixed set of points (Choy et al., 2016; Fan et al., 2017; Lin et al., 2018; Michalkiewicz et al., 2019); contextual mapping methods map the latent code into a function that can be sampled or otherwise manipulated to acquire a reconstruction (Yang et al., 2018; Park et al., 2019; Michalkiewicz et al., 2019; Mescheder et al., 2019). Direct decoding methods generally suffer from the limitation that their predictions are of fixed resolution; they cannot be sampled more or less precisely. With contextual mapping methods, it is possible in principle to sample the object to arbitrarily high resolution with the correct decoder function. However, sampling can provide a significant computational burden for some contextual mapping approaches as those proposed by Park et al. (2019) and Michalkiewicz et al. (2019). Another hurdle is the need for post-processing such as applying the Marching Cubes algorithm developed by Lorensen and Cline (1987). We call contextual mapping approaches that encode context by concatenating a duplicate of a latent context vector with each input latent vector concatenation (LVC) methods. In particular, we compare with LVC architectures used in FoldingNet (Yang et al., 2018) and DeepSDF (Park et al., 2019).\nHOF is a contextual mapping method that distinguishes itself from other methods within this class through its approach to representing the mapping function: HOF uses one neural network to estimate the weights of another. Conceptually related methods have been previously studied under nomenclature such as the ‘fast-weight’ paradigm (Schmidhuber, 1992; De Brabandere et al., 2016; Klein et al., 2015; Riegler et al., 2015) and more recently ‘hypernetworks’ (Ha et al., 2016). However, the work by Schmidhuber (1992) deals with encoding memories in sequence learning tasks. Ha et al. (2016) suggest that estimating weights of one network with another might lead to improvements in parameter-efficiency. However, this work does not leverage the key insight of using network parameters that are estimated per sample in vision tasks." }, { "heading": "3 HIGHER-ORDER FUNCTION NETWORKS", "text": "HOF is motivated by the independent observations by both Yang et al. (2018) and Park et al. (2019) that LVC methods do not perform competitively when the context vector is injected by simply concatenating it with each input. In both works, the LVC methods proposed required architectural workarounds to produce sufficient performance on reconstruction tasks, including injecting the latent code multiple times at various layers in the network. HOF does not suffer from these shortcomings due to its richer context encoding (the entire mapping network encodes context) in comparison with LVC. We compare the HOF and LVC regimes more precisely in Section 3.2. Quantitative comparisons of HOF with existing methods can be found in Table 1." }, { "heading": "3.1 A FAST-WEIGHTS APPROACH TO 3D OBJECT REPRESENTATION AND RECONSTRUCTION", "text": "We consider the task of reconstructing an object point cloud O from an image. We start by training a neural network gφ with parameters φ (Figure 1, top-left) to output the parameters θ of a mapping function fθ, which reconstructs the object when applied to a set of points X sampled uniformly from a canonical set such as the unit sphere (Figure 1, top-right). 
We note that the number of samples in X can be increased or decreased to produce higher- or lower-resolution reconstructions without changing the network architecture or retraining, in contrast with direct decoding methods and some contextual mapping methods that use fixed, non-random samples from X (Yang et al., 2018). The input to g_φ is an RGB image I; our implementation takes 64 × 64 × 3 RGB images as input, but our method is general to any input representation for which a corresponding differentiable encoder network can be constructed to estimate θ (e.g., PointNet (Qi et al., 2017a) for point-cloud completion). Given I, we compute the parameters of the mapping network θ_I as

θ_I = g_φ(I)   (1)

That is, the encoder g_φ : R^{3×64×64} → R^d directly regresses the d-dimensional parameters θ_I of the mapping network f_{θ_I} : R^c → R^3, which maps c-dimensional points in the canonical space X to points in the reconstruction Ô (see Figure 1). We then transform our canonical space X with f_{θ_I} in the same manner as other contextual mapping methods:

Ô = {f_{θ_I}(x_i) : x_i ∈ X}   (2)

During training, we sample an image I and the corresponding ground-truth point cloud model O, where O contains 10,000 points sampled from the surface of the true object. We then obtain the mapping f_{θ_I} with θ_I = g_φ(I) and produce an estimated reconstruction of O as in Equation 2. In training, we compute f_{θ_I}(x) for a sample of only 1000 points in X; however, we find that sampling many more points (10-100× as many) at test time still yields high-quality reconstructions. This sample is drawn from a uniform distribution over the set X. We then compute a loss for the prediction Ô using a differentiable set-similarity metric such as the Chamfer distance or the Earth Mover's Distance. We focus on the Chamfer distance as both a training objective and a metric for assessing reconstruction quality. The asymmetric Chamfer distance CD(X, Y), often used for quantifying the similarity of two point sets X and Y, is given by

CD(X, Y) = (1 / |X|) Σ_{x_i ∈ X} min_{y_i ∈ Y} ||x_i − y_i||_2^2   (3)

The Chamfer distance is defined even if the sets X and Y have different cardinalities. We train g_φ to minimize the symmetric objective function ℓ(Ô, O) = CD(Ô, O) + CD(O, Ô), as in Fan et al. (2017); a minimal sketch of this objective is given below.
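A minimal sketch of the symmetric Chamfer objective for two point sets, using torch.cdist for the pairwise Euclidean distances, is:

```python
import torch

def chamfer(pred, target):
    """Symmetric Chamfer loss CD(pred, target) + CD(target, pred).

    pred, target: (N, 3) and (M, 3) point sets; uses squared Euclidean
    nearest-neighbor distances as in Equation 3.
    """
    d = torch.cdist(pred, target) ** 2        # (N, M) pairwise squared distances
    forward = d.min(dim=1).values.mean()      # CD(pred, target)
    backward = d.min(dim=0).values.mean()     # CD(target, pred)
    return forward + backward

pred = torch.randn(1000, 3, requires_grad=True)   # e.g. f_theta(X)
target = torch.randn(10000, 3)                    # ground-truth surface samples
loss = chamfer(pred, target)
loss.backward()
```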
" }, { "heading": "3.2 COMPARISON WITH LVC METHODS", "text": "We compare our mapping approach with LVC architectures such as DeepSDF (Park et al., 2019) and FoldingNet (Yang et al., 2018). These architectures control the output of the decoder through the concatenation of a latent 'codeword' vector z with each input x_i ∈ X; the codeword is estimated by an encoder g_{φ_LVC} for each image. We consider the case in which the latent vector is concatenated only with the inputs to the first layer of the decoder network f_θ, which we assume to be an MLP. We are interested in analyzing how the network output for x_i can be modulated by varying z.

If the vector a_i contains the pre-activations of the first layer of f_θ given an input point x_i, we have

a_i = W^x x_i + W^z z + b

where W^x, W^z, and b are fixed parameters of the decoder, and only z is a function of I. If we absorb the parameters W^z and b into the encoder parameters φ_LVC (as W^z and b are fixed for all x_i), we can define a new, equivalent latent representation b* = W^z z + b = W^z g_{φ_LVC}(I) + b and a new encoder function h with parameters φ_LVC ∪ {W^z, b} such that h(I) = b*. This gives

a_i = W^x x_i + h(I)

Thus, the LVC approach is equivalent to estimating a fixed subset of the parameters θ of the decoder f_θ on a per-sample basis, namely the first-layer bias. From this perspective, HOF is an intuitive generalization: rather than estimating just the first-layer bias, we allow our encoder to modulate all of the parameters of the decoder f_θ on a per-sample basis.

Having shown that HOF generalizes existing contextual mapping methods, in the next section we present a novel application of contextual mapping that leverages the compositionality of the estimated mapping functions to aggregate features of multiple objects or multiple viewpoints of the same object." }, { "heading": "3.3 EXTENDING CONTEXTUAL MAPPING METHODS: FEATURE AGGREGATION THROUGH FUNCTION COMPOSITION", "text": "An advantageous property of methods that use a latent codeword is that they have been empirically shown to learn a meaningful space of object geometries, in which interpolating between object encodings gives new, coherent object encodings (Fan et al., 2017; Yang et al., 2018; Park et al., 2019; Groueix et al., 2018). HOF, on the other hand, does not obviously share this property: interpolating between the mapping-function parameters estimated for two different objects need not yield a new, coherent object, as prior work has shown that the solution space of 'good' neural networks is highly non-convex (Li et al., 2018). We demonstrate empirically in Figure 7 that naively interpolating between reconstruction functions in the HOF regime does indeed produce meaningless blobs. However, with a small modification to the HOF formulation in Equation 2, we can in fact learn a rich space of functions in which we can interpolate between objects through function composition.

We extend the formulation in Equation 2 to one where an object is represented as the k-th power of the mapping f_{θ_I}:

Ô = {f^k_{θ_I}(x) : x ∈ X}   (4)

where f^k is defined as the composition of f with itself (k − 1) times: f^k(x) = f(f^{k−1}(x)), with f^0(x) ≜ x. We call a mapping f_{θ_I} whose k-th power reconstructs the object O in image I the k-mapping for O.

This modification to Equation 2 adds an additional constraint to the mapping: its domain and codomain must be the same. However, evaluating powers of f leverages the power of weight sharing in neural network architectures; for an MLP mapping architecture with l layers (excluding the input layer), evaluating its k-th power is equivalent to an MLP with l × k layers with shared weights. This formulation also has connections to earlier work on continuous attractor networks as a model for encoding memories in the brain as k becomes large (Seung, 1998).

In Section 4.3, we conduct experiments in a setting in which we have acquired RGB images I and J of two objects, O_I and O_J, respectively. Applying our encoder to these images, we obtain k-mappings f_{θ_I} and f_{θ_J}, with parameters θ_I = g_φ(I) and θ_J = g_φ(J), respectively. We hypothesize that we can combine the information contained in each mapping function by evaluating any of the 2^k possible functions of the form

f_interp = f_{θ_1} ◦ ... ◦ f_{θ_k}   (5)

where the parameters of each mapping f_{θ_i} are either the parameters of f_{θ_I} or f_{θ_J}. Figures 4 and 7 show that interpolation through function composition provides interesting, meaningful outputs in experiments with k = 2 and k = 4. A minimal sketch of this composition scheme is given below.
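The composition scheme can be sketched in a few lines. The toy one-layer mappings below stand in for decoded k-mappings (in HOF these would be fast-weights MLPs with parameters g_φ(I) and g_φ(J)); they are only meant to show how the 2^k compositions of Equation 5 are formed.

```python
import torch

def power(f, k):
    """The k-th power of f: f composed with itself k - 1 times (Equation 4)."""
    def fk(x):
        for _ in range(k):
            x = f(x)
        return x
    return fk

def compose(fs):
    """Compose a sequence of mappings; the rightmost function is applied first,
    so compose([f_A, f_B]) computes f_A(f_B(x)) as in Equation 5."""
    def g(x):
        for f in reversed(fs):
            x = f(x)
        return x
    return g

# Toy stand-ins for two decoded 2-mappings (R^3 -> R^3).
W_A, W_B = torch.randn(3, 3), torch.randn(3, 3)
f_A = lambda x: torch.tanh(x @ W_A)
f_B = lambda x: torch.tanh(x @ W_B)

X = torch.randn(1000, 3)
recon_A = power(f_A, 2)(X)          # reconstruction of object A from its 2-mapping
mixed_AB = compose([f_A, f_B])(X)   # one of the 2^k interpolated objects (k = 2)
mixed_BA = compose([f_B, f_A])(X)   # the composition is not commutative
```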
" }, { "heading": "4 EXPERIMENTAL EVALUATIONS", "text": "We conduct various empirical studies in order to justify two key claims. In Sections 4.1 and 4.2, we compare with other contextual mapping architectures to demonstrate that HOF provides equal or better performance with a significant reduction in parameters and compute time. In Section 4.3, we demonstrate that extending contextual mapping approaches such as HOF with multiple compositions of the mapping function provides a simple and effective approach to object interpolation. Further experimentation, including ablation studies and a simulated robot navigation scenario, can be found in Appendix A.7." }, { "heading": "4.1 EVALUATING RECONSTRUCTION QUALITY ON SHAPENET", "text": "We test HOF's ability to reconstruct a 3D point cloud of an object given a single RGB image, comparing with other architectures for 3D reconstruction. We conduct two experiments:

1. We compare HOF with LVC architectures to show that HOF is a more parameter-efficient approach to contextual mapping than existing fixed-decoder architectures.

2. We compare HOF to a broader set of state-of-the-art methods on a larger subset of the ShapeNet dataset, demonstrating that it matches or surpasses existing state-of-the-art methods in terms of reconstruction quality.

In the first experiment, we compare HOF with other LVC architectures on 13 of the largest classes of ShapeNet (Yan et al., 2016), using the asymmetric Chamfer distance metric (Equation 3) as reported in Lin et al. (2018). In the second experiment, we compare HOF with other methods for 3D reconstruction on a broader selection of 55 classes of the ShapeNet dataset, as in Tatarchenko et al. (2019). In line with the recommendations in Tatarchenko et al. (2019), we report F1 scores for this evaluation." }, { "heading": "4.1.1 LVC ARCHITECTURE COMPARISON", "text": "For this experiment, we compare HOF with LVC decoder architectures proposed in the literature, specifically those used in DeepSDF (Park et al., 2019) and FoldingNet (Yang et al., 2018), as well as several other baselines. Each architecture maps points from R^c to R^3 in order to enable a direct comparison. The dataset contains 31773 ground-truth point cloud models for training/validation and 7926 for testing. For each point cloud, there are 24 RGB renderings of the object from a fixed set of 24 camera positions. For both training and testing, each point cloud is shifted so that its bounding-box center is at the origin, in line with Fan et al. (2017). At test time, there is no post-processing performed on the predicted point cloud. The architectures we compare in this experiment are:

1. HOF-1: 1 hidden layer containing 1024 hidden neurons

2. HOF-3: 3 hidden layers containing 128 hidden neurons

3. DeepSDF as described in Park et al. (2019), with 8 hidden layers containing 512 neurons each

4. FoldingNet as described in Yang et al. (2018), with 2 successive 'folds', each with a 3-layer MLP with 512 hidden neurons

5. EPCG architecture as reported in Lin et al. (2018)

6. Point Set Generation network (Fan et al., 2017) as reported in Lin et al. (2018)

7. 3D-R2N2 (Choy et al., 2016) as reported in Lin et al. (2018)

Results are reported in Table 1. Chamfer Distance scores are scaled by 100, in line with Lin et al. (2018). We find that HOF performs significantly better than the direct decoding baseline of Lin et al. (2018) and performs on par with other contextual mapping approaches with 30× fewer parameters. In order to provide a fair comparison with the baseline method, we ensure that ground-truth objects are scaled identically to those in Lin et al. (2018).
We report both 'forward' Chamfer distance CD(Pred, Target) and 'backward' Chamfer distance CD(Target, Pred), again in line with the convention established by Lin et al. (2018). Table 7 contains a class-wise breakdown. Qualitative comparisons of the outputs of HOF with state-of-the-art architectures are shown in Figure 2." }, { "heading": "4.1.2 SHAPENET BREADTH COMPARISON", "text": "Tatarchenko et al. (2019) question the common practice in single-view 3D reconstruction of evaluating only on the largest classes of ShapeNet. The authors demonstrate that reconstruction methods do not exhibit performance correlated with the size of object classes, and thus evaluating on smaller ShapeNet classes is justified.

Table 2: F1/class-weighted F1 score comparison of HOF with methods reported in Tatarchenko et al. (2019). Higher is better.

Method | F1 | cw-F1
Oracle NN | 0.290 | 0.321
AtlasNet | 0.252 | 0.287
OGN | 0.217 | 0.230
Matryoshka | 0.264 | 0.297
Retrieval | 0.237 | 0.260
HOF (Ours) | 0.291 | 0.310

We use the dataset provided by Tatarchenko et al. (2019), which includes 55 classes from the ShapeNet dataset. The authors also suggest using the F1 score metric, defined as the harmonic mean of precision and recall (Tatarchenko et al., 2019).

We include comparisons with AtlasNet (Groueix et al., 2018), Octree Generating Networks (Tatarchenko et al., 2017), Matryoshka Networks (Richter and Roth, 2018), and retrieval baselines as reported by Tatarchenko et al. (2019). We find that HOF performs competitively with all of these state-of-the-art methods, even surpassing them on many classes. We show summary statistics in Table 2. The F1 column contains the average F1 score for each method, averaging uniformly over all classes regardless of class imbalance in the testing set. The cw-F1 column averages class F1 scores weighted by the fraction of the test set comprised by each class; that is, classes that are over-represented in the testing set are correspondingly over-represented in the cw-F1 score. On the mean F1 metric, HOF outperforms all other methods, including the 'Oracle Nearest-Neighbor' approach described by Tatarchenko et al. (2019), which uses the closest object in the training set as its prediction for each test sample. A complete class-wise performance breakdown is given in Table 8 in the Appendix. We find that HOF outperforms all 5 comparison methods (including the Oracle) in 23 of the 55 classes; excluding the Oracle, HOF shows the best performance in 28 of the 55 classes. A minimal sketch of the point-set F1 metric is given below.
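The following sketch computes the point-set F1 score as the harmonic mean of precision and recall at a distance threshold; the threshold value here is a hypothetical choice, as the value used by Tatarchenko et al. (2019) is not restated in this paper.

```python
import torch

def f1_score(pred, target, threshold=0.01):
    """F1 between two point sets: a point counts as correct if its nearest
    neighbor in the other set lies within `threshold`."""
    d = torch.cdist(pred, target)                                  # (N, M)
    precision = (d.min(dim=1).values < threshold).float().mean()   # pred -> target
    recall = (d.min(dim=0).values < threshold).float().mean()      # target -> pred
    if precision + recall == 0:
        return torch.tensor(0.0)
    return 2 * precision * recall / (precision + recall)

pred = torch.randn(1000, 3)
target = torch.randn(10000, 3)
print(f1_score(pred, target, threshold=0.1))
```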
This performance improvement may be significant for embedded systems that need to efficiently store and reconstruct 3D objects in real time; our representation is small in size, straightforward to sample uniformly (unlike a CAD model), and fast to evaluate." }, { "heading": "4.3 OBJECT INTERPOLATION", "text": "To demonstrate that our functional representation yields an expressive latent object space, we show that the composition of these functions produces interesting new objects. The top of Figure 4 shows the composition procedure in detail. If we have estimated 2-mappings for two objects OA and OB, we demonstrate that fθA(fθB(X)) and fθB(fθA(X)) both provide interesting mixtures of the two objects and mix the features of the objects in different ways; the composition is not commutative. This approach is conceptually distinct from other object interpolation methods, which decode the interpolation of two different latent vectors. In our formulation, we visualize the outputs of an encoder that has been trained to output 2-mappings in R^3. In addition, the bottom of Figure 4 demonstrates a smooth gradient of compositions of the reconstruction functions for two airplanes, when a higher order of mappings (k = 4) is used.
To further convey the expressiveness of the composition-based object interpolation, we compare it against a method that performs interpolation in the network parameter space. This latter approach resembles a common way of performing object interpolation in LVC methods: generate latent codewords from each image, and synthesize new objects by feeding the interpolated latent vectors into the decoder. As a proxy for the latent vector interpolation used in LVC methods, we generate new objects as follows. After outputting the network parameters θA and θB for the objects OA and OB, we use the interpolated parameters θ′ = (θA + θB)/2 to represent the mapping function. In Figure 7, we show that our composition-based interpolation is more capable of generating coherent new objects, better preserving the geometric features inherited from the source objects." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "We presented Higher Order Function Networks (HOF), which generate a functional representation of an object from an RGB image. The function can be represented as a small MLP with ‘fast-weights’, or weights that are output by an encoder network gφ with learned, fixed weights. HOF demonstrates state-of-the-art reconstruction quality, as measured by Chamfer distance and F1 score with ground truth models, with far fewer decoder parameters than existing methods. Additionally, we extended contextual mapping methods to allow for interpolation between objects by composing the roots of their corresponding mapping functions. Another advantage of HOF is that points on the surface can be sampled directly, without expensive post-processing methods such as estimating level sets.
For future work, we would like to further improve on the parameter-efficiency of HOF, for example with versions of HOF that output only a sparse but flexible subset of the parameters of the mapping function. In addition, we would like to explore connections with other works investigating the properties of ‘high-quality’ neural network parameters and initializations, such as HyperNetworks (Ha et al., 2016), the Lottery Ticket Hypothesis (Frankle and Carbin, 2018), and model-agnostic meta-learning (Finn et al., 2017).
There are also many interesting applications of HOF in domains such as robotics.
A demonstrative application in motion planning can be found in Appendix B.2.2, and Engin et al. (2020) explore extensions of HOF for multi-view reconstruction and motion planning. Using functional representations directly, for example for manipulation or navigation tasks, rather than generating intermediate 3D point clouds, is also an interesting avenue for future work. We hope that the ideas presented in this paper provide a basis for future developments of efficient 3D object representations and neural network architectures." }, { "heading": "A ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "A.1 ARCHITECTURAL VARIATIONS", "text": "We compare HOF trained on the subset of ShapeNet used in Lin et al. (2018) with several architectural variations, including using Resnet18 rather than our own encoder architecture, using a fast-weights decoder architecture with 6, rather than 3, hidden layers, and using tanh rather than ReLU in the fast-weights decoder. Results are summarized in Table 3. We find that HOF is competitive in all of the formulations, although using the tanh activation function in the decoder instead of ReLU shows a small degradation in performance. Future work might investigate more deeply what principles underlie the design of fast-weights architectures." }, { "heading": "A.2 THE ROLE OF THE SAMPLING DOMAIN", "text": "In the experimental results reported in the main paper, we sample input points from the interior of the unit sphere uniformly at random. However, we might sample other topologies as input to the mapping function. In this experiment, we compare the performance of HOF when we sample from the interior of the 3D sphere (standard configuration), the surface of the 3D sphere, the interior of the 3D cube, and the interior of the 4D sphere." }, { "heading": "A.3 COMPARING DIFFERENT VALUES OF K FOR COMPOSITION", "text": "Here, we compare the performance of various instances of HOF when we vary the value of k, the number of self-compositions. We use the HOF-1 decoder architecture (1 hidden layer with 1024 neurons) and a Resnet18 encoder architecture. This encoder architecture is different from the baseline encoder used for the results in Table 1, which explains deviations in performance from those numbers. Although performance does not vary significantly, there are some differences in reconstruction quality across values of k. Future work might investigate how best to make use of self-compositional or recurrent decoder architectures to maximize both computational performance and accuracy." }, { "heading": "A.4 PROJECTION REGULARIZATION", "text": "Intuitively, we might expect that fθ(X) would approximate the Euclidean projection function, i.e., fθ(X) ≈ ProjY(X). However, qualitatively, we find that this is not the case: fθ learns a less interpretable mapping from the canonical set X to the object O. To encourage a more interpretable mapping from the canonical set X to the object O, we can regularize fθ to penalize the ‘distance traveled’ by points transformed under fθ. A regularization term with a small coefficient (λ = 0.01) is effective in encouraging this behavior.
Making this change results in little deviation in performance, while providing a more coherent mapping. Figure 5 highlights this distinction.
This penalty for the mapping computed by fθI for each point in the sample X̃ is given as
R(f_{\theta_I}, \tilde{X}) = \frac{1}{|\tilde{X}|} \sum_{x_i \in \tilde{X}} \left\| f_{\theta_I}(x_i) - x_i \right\|_2^2 \qquad (6)
where X̃ is a sample from the canonical set X.
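A minimal PyTorch sketch of this penalty (Equation 6), assuming f_theta maps a batch of sampled points to points on the object:

    import torch

    def projection_reg(f_theta, X_tilde):
        # Eq. 6: mean squared 'distance traveled' by points under the mapping.
        return ((f_theta(X_tilde) - X_tilde) ** 2).sum(dim=1).mean()

    # In training, this would be added to the reconstruction loss with a
    # small weight, e.g.: loss = chamfer_loss + 0.01 * projection_reg(...)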
We might instead directly penalize the difference between fθI and the Euclidean projection over the sampled set X̃ as:
R(f_{\theta_I}, \tilde{X}) = \frac{1}{|\tilde{X}|} \sum_{x_i \in \tilde{X}} \left\| f_{\theta_I}(x_i) - \operatorname*{argmin}_{o_i \in O} \|o_i - x_i\|_2 \right\|_2^2 \qquad (7)
However, we find that this regularization can be overly constraining, for example in cases where points are sampled near the boundaries of the Voronoi tessellation of the target point cloud. The formulation in Equation 6 gives the mapping greater flexibility while still providing the desired semantics." }, { "heading": "A.5 COLLISION-FREE PATH GENERATION", "text": "From Chamfer distance or F1 scores alone, it is difficult to determine if one method’s reconstruction quality is meaningfully different from another’s. In this section, we introduce a new benchmark to evaluate the practical implications of using 3D reconstructions for collision-free path generation. We compare the reconstructions of HOF with Lin et al. (2018), which is the most competitive direct decoding method according to Table 1. This experiment is intended to give an additional perspective on what a difference in average Chamfer distance to the ground truth object means. We show that given an RGB image, we can efficiently find a near-optimal path P̂ between two points around the bounding sphere of the object without colliding with it, and without taking a path much longer than the optimal path P*, where the optimal path is defined as the shortest collision-free path between two given points. A complete definition of the experiment and its implementation are given in Section B.2.2.
We quantify the quality of our predictions by measuring both i) the proportion of predicted paths P̂ that are collision-free and ii) the average ratio of the length of a collision-free estimated path P̂ to that of the corresponding optimal path P*.
These two metrics conceptually mirror the backward and forward Chamfer distance metrics, respectively; a low collision rate corresponds to few missing structures in the reconstructed object (backward Chamfer, or surface coverage), while successful paths close to the optimal path length correspond to few extraneous structures in the reconstruction (forward Chamfer, or shape similarity).
We find that HOF provides meaningful gains over the reconstruction method recently proposed in Lin et al. (2018) in the context of path planning around the reconstructed model. HOF performs significantly better both in terms of path length as well as collision rate. However, although Lin et al. (2018) reported results on the reconstruction task with objects in a canonical frame, in the context of robotics, learning in a viewer-centric frame is necessary. It has been noted in Wu et al. (2018) that generalization might be easier when learning reconstruction in a viewer-oriented frame. We test this theory by training on objects both in their canonical frame and in the ‘camera’ frame. We rotate each point cloud Y into its camera frame orientation using the azimuth and elevation values for each image. We rotate the point cloud about the origin, keeping the bounding box centered at (0,0,0). Trained and tested in the viewer-centric camera frame, HOF performs even better than in the canonical frame, giving Chamfer distance scores of 1.486 / 0.979 (compared with 1.534 / 1.046 for the canonical frame).
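A sketch of this rotation into the camera frame; the axis conventions here are an assumption, since the exact convention depends on the renderer:

    import numpy as np

    def to_camera_frame(Y, azimuth, elevation):
        # Rotate a point cloud about the origin into the viewer-centric frame
        # (angles in radians): yaw about z by the azimuth, then pitch about x
        # by the elevation.
        Rz = np.array([[np.cos(azimuth), -np.sin(azimuth), 0.0],
                       [np.sin(azimuth),  np.cos(azimuth), 0.0],
                       [0.0, 0.0, 1.0]])
        Rx = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(elevation), -np.sin(elevation)],
                       [0.0, np.sin(elevation),  np.cos(elevation)]])
        return Y @ (Rx @ Rz).T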
The most notably improved classes in the viewer-centric evaluation are cabinets and loudspeakers; it is intuitive that these particularly ‘boxy’ objects might be better reconstructed in a viewer-centric frame, as their symmetric nature might make it difficult to identify their canonical frame from a single image.
Results of this comparison, as well as other ablation studies, are reported in Supplementary Tables 3 and 4. The path quality of the baselines, EPCG (Lin et al., 2018), and HOF is presented in Table 6." }, { "heading": "A.6 PARAMETER INTERPOLATION VS FUNCTION COMPOSITION", "text": "In order to illustrate the non-trivial mappings learned by HOF for k > 1, we compare the reconstructions obtained by interpolating naively between the decoder parameters of two different objects and the reconstructions obtained by composing the two reconstruction functions. Results are shown in Figure 7." }, { "heading": "A.7 FULL CLASS BREAKDOWN FOR LVC EXPERIMENT CHAMFER DISTANCE SCORES", "text": "Table 7 shows class-wise Chamfer distances for HOF and the baseline methods in the LVC experiment.
Table 7: Class-weighted asymmetric Chamfer distance results for our method compared to other recent methods for 3D reconstruction from images as reported in Lin et al. (2018). We use the HOF-3 architecture with k = 1.
Category 3D-R2N2 PSG EPCG HOF (Ours)
Airplane 2.399 / 2.391 1.301 / 1.488 1.294 / 1.541 0.936 / 0.723
Bench 2.323 / 2.603 1.814 / 1.983 1.757 / 1.487 1.288 / 0.914
Cabinet 1.420 / 2.619 2.463 / 2.444 1.814 / 1.072 1.764 / 1.383
Car 1.664 / 3.146 1.800 / 2.053 1.446 / 1.061 1.367 / 0.810
Chair 1.854 / 3.080 1.887 / 2.355 1.886 / 2.041 1.670 / 1.147
Display 2.088 / 2.953 1.919 / 2.334 2.142 / 1.440 1.765 / 1.130
Lamp 5.698 / 7.331 2.347 / 2.212 2.635 / 4.459 2.054 / 1.325
Loudspeaker 2.487 / 4.203 3.215 / 2.788 2.371 / 1.706 2.126 / 1.398
Rifle 4.193 / 2.447 1.316 / 1.358 1.289 / 1.510 1.066 / 0.817
Sofa 2.306 / 3.196 2.592 / 2.784 1.917 / 1.423 1.666 / 1.064
Table 2.128 / 3.134 1.874 / 2.229 1.689 / 1.620 1.377 / 0.979
Telephone 1.874 / 2.734 1.516 / 1.989 1.939 / 1.198 1.387 / 0.944
Watercraft 3.210 / 3.614 1.715 / 1.877 1.813 / 1.550 1.474 / 0.967
Mean 2.588 / 3.342 1.982 / 2.146 1.846 / 1.701 1.534 / 1.046" }, { "heading": "A.8 FULL CLASS BREAKDOWN FOR BROAD EXPERIMENT F1 SCORES", "text": "Table 8 shows the class-wise performance of HOF as well as each method compared in the broad ShapeNet comparison as reported by Tatarchenko et al. (2019)." }, { "heading": "B TRAINING/TESTING DATASET AND IMPLEMENTATION DETAILS", "text": "In the reconstruction experiment, Chamfer distance scores are scaled up by 100 as in Lin et al. (2018) for easier comparison. For the numbers reported in Table 7, we use the best performance of 3D-R2N2 (5 views, as reported in Lin et al. (2018)). In comparing with methods like FoldingNet (Yang et al., 2018) and DeepSDF (Park et al., 2019), we focus on efficiency of representation rather than reconstruction quality. The performance comparison in Figure 3 and the ablation experiment in Table 3 attempt to compare these architectures in this way (FoldingNet is a slightly shallower version of the DeepSDF architecture; 6 rather than 8 fully-connected layers)." }, { "heading": "B.1 DATASET", "text": "We use two different datasets for evaluation. First, in Table 1, we use the ShapeNet train/validation/test splits of a subset of the ShapeNet dataset (Chang et al., 2015) described in Yan et al. (2016).
The dataset can be downloaded from https://github.com/xcyan/nips16_PTN. Point clouds have 100k points. Upon closer inspection, we have found that this subset includes some inconsistent/noisy labels, including:
1. Inconsistency of object interior filling (e.g. some objects are only surfaces, while some have densely sampled interiors)
2. Objects with floating text annotations that are represented in the point cloud model
3. Objects that are inconsistently small (scaled down by a factor of 5 or more compared to other similar objects)
Although these types of inconsistencies are rare, they are noteworthy. We used the data as-is, but future contributions might include both ‘cleaned’ and ‘noisy’ variants of this dataset. Learning from noisy labels is an important problem but is orthogonal to 3D reconstruction.
In our second experiment, we use a broader dataset based on ShapeNet, with train and test splits taken from Tatarchenko et al. (2019). The dataset can be downloaded from https://github.com/lmb-freiburg/what3d.
B.2 IMPLEMENTATION DETAILS" }, { "heading": "B.2.1 NETWORK ARCHITECTURE AND TRAINING", "text": "For the problem of 3D reconstruction from an RGB image, which we address here, we represent gφ as a convolutional neural network based on the DenseNet architecture proposed in Huang et al. (2017). We call this our baseline encoder network. The baseline encoder network has 3 dense blocks (each containing 4 convolutional layers) followed by 3 fully connected layers. The schedule of feature maps is [16, 32, 64] for the dense blocks. Each fully connected layer contains 1024 neurons.
We use two variants of the mapping architecture fθ. One, which we call HOF-1, is an MLP with 1 hidden layer containing 1024 neurons. A second version, HOF-3, is an MLP with 3 hidden layers, each containing 128 hidden units. Both formulations use the ReLU activation function (Glorot et al., 2011). Because gφ and fθ are both differentiable almost everywhere, we can train the entire system end-to-end with backpropagation. We use the Adam optimizer with learning rate 1e-5 and batch size 1, training for 4 epochs for all experiments (1 epoch ≈ 725k parameter updates). Training HOF from scratch took roughly 36 hours." }, { "heading": "B.2.2 PATH PLANNING EXPERIMENT", "text": "We use the class ‘chair’ from the dataset described in Section 4.1 in our experiments. The objects from this class have considerable variation and complexity in shape; thus, they are useful for evaluating the quality of the generated paths.
Path planning is performed in a three-dimensional grid environment. All the objects in our dataset fit inside the unit cube. Given the predicted point cloud of an object, we first voxelize the points by constructing an n × n × n occupancy map V centered at the origin of the object, with voxel size 2/n. Next, we generate start and end points as follows. We choose a unit vector v sampled uniformly at random and compute d = (n/2) · v/||v||1. We use the end points of d and −d as the start and goal locations. For each method, we generate the paths with the A* algorithm using the voxelization of the predicted point clouds as obstacles, and the sampled start and goal positions. The movement is rectilinear in the voxel space and the distances are measured with the L1 metric. In the experiments we use an occupancy grid of size 32 × 32 × 32, and sample 100 start and goal location pairs per model.
In addition to the paths generated using the predictions from the EPCG Lin et al.
(2018) and HOF methods, we present two other baselines (Figure 8). The first baseline, Shortest L1, outputs the shortest path under the L1 metric while ignoring the obstacles in the scene; the second baseline, Shortest Around Bounding Box (SABB), takes the bounding box of the ground truth voxels as the environment to generate the path.
We present the path generation results in Table 6. The baseline Shortest L1 gives the optimal solution when the path is collision-free. However, since most of the shortest paths go through the object, this baseline has a poor success rate. In contrast, the paths output by SABB are always collision-free, as the shortest path is computed using the bounding box of the true voxelization as the obstacles in the environment. The paths generated by SABB are longer than those of the other methods, since they are ‘cautious’ not to collide with the object. These two baselines are the best performers on the metric they are designed for, yet they suffer on the complementary metric. Our method, on the other hand, achieves almost optimal results on both metrics due to its good-quality reconstructions." }, { "heading": "B.2.3 COMPUTING ENVIRONMENT", "text": "All GPU experiments were performed on NVIDIA GTX 1080 Ti GPUs. The CPU running times were computed on one of 12 cores of an Intel 7920X processor." } ]
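To make the setup in B.2.2 concrete, a minimal sketch of the voxelization and start/goal sampling, assuming the occupancy grid spans [-1, 1]^3 around the object:

    import numpy as np

    def voxelize(points, n=32):
        # n x n x n occupancy grid centered at the origin, voxel size 2/n.
        idx = np.clip(((points + 1.0) * (n / 2.0)).astype(int), 0, n - 1)
        grid = np.zeros((n, n, n), dtype=bool)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
        return grid

    def sample_endpoints(n=32):
        # Random direction v; start/goal are the endpoints of d and -d,
        # with d = (n/2) * v / ||v||_1.
        v = np.random.randn(3)
        d = (n / 2.0) * v / np.abs(v).sum()
        return d, -d

A* would then run on the grid with rectilinear moves and L1 distances.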
2,020
null
SP:7f6ef5f3fa7627e799377aa06561904b80c5c1c4
[ "This paper proposes a novel direction for curriculum learning. Previous works in the area of curriculum learning focused on choosing easier samples first and harder samples later when learning the neural network models. This is problematic since we need to first compute how difficult each samples are, which introduces computational overheads. In this work, the paper propose to gradually learn with a class-wise perspective instead. The neural network has only access to the labels of certain classes (chosen randomly) in the beginning, and the samples that belong to the rest of the classes are treated as unseen samples but with a label forced into the last class. Then, the true labels of unseen classes are gradually revealed, and this is repeated until in the final incremental step, all labels are revealed. The method further has an adaptive compensation step, which use a less peaked distribution label for supervision only for the incorrectly predicted samples. The experiments show that with only the first step, the proposed method is worse than the original batch learning, but by adding the second label smoothing step, there is improvement over the original batch learning setup.", "This paper makes the observation that a curriculum need not depend on the difficulty of examples, as most (maybe all) prior works do. They suggest instead a curriculum based on learning one class at a time, starting with one and masking the label of all others as 'unknown' (i.e. treating them as negative examples), and unmasking classes as learning progresses. This is the \"incremental labels\" part. They make another observation, that label smoothing is applied to all examples regardless of difficulty, and propose an alternative \"adaptive\" version where labels are smoothed only for difficult examples. This is the \"adaptive compensation\" part." ]
Like humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum (Weinshall et al., 2018). While conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, this forces networks to learn from small subsets of data while introducing pre-computation overheads. In this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), which takes a novel approach to curriculum learning. LILAC emphasizes incrementally learning labels instead of incrementally learning difficult samples. It works in two distinct phases: first, in the incremental label introduction phase, we recursively reveal ground-truth labels in small installments while using a fake label for the remaining data. In the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution. We evaluate LILAC against the closest comparable methods in batch and curriculum learning and label smoothing, across three standard image benchmarks, CIFAR-10, CIFAR-100, and STL-10. We show that our method outperforms batch learning with higher mean recognition accuracy as well as lower standard deviation in performance, consistently across all benchmarks. We further extend LILAC to show the highest performance on CIFAR-10 for methods using simple data augmentation, while exhibiting label-order invariance among other properties.
[]
[ { "authors": [ "Judith Avrahami", "Yaakov Kareev", "Yonatan Bogot", "Ruth Caspi", "Salomka Dunaevsky", "Sharon Lerner" ], "title": "Teaching by examples: Implications for the process of category acquisition", "venue": "The Quarterly Journal of Experimental Psychology Section A,", "year": 1997 }, { "authors": [ "Hessam Bagherinezhad", "Maxwell Horton", "Mohammad Rastegari", "Ali Farhadi" ], "title": "Label refinery: Improving imagenet classification through label progression", "venue": "arXiv preprint arXiv:1805.02641,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Maxime Bucher", "Stéphane Herbin", "Frédéric Jurie" ], "title": "Hard negative mining for metric learning based zero-shot classification", "venue": "Computer Vision – ECCV 2016 Workshops,", "year": 2016 }, { "authors": [ "Francisco M Castro", "Manuel J Marı́n-Jiménez", "Nicolás Guil", "Cordelia Schmid", "Karteek Alahari" ], "title": "End-to-end incremental learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Dipankar Das", "Sasikanth Avancha", "Dheevatsa Mudigere", "Karthikeyan Vaidynathan", "Srinivas Sridharan", "Dhiraj Kalamkar", "Bharat Kaul", "Pradeep Dubey" ], "title": "Distributed deep learning using synchronous stochastic gradient descent", "venue": "arXiv preprint arXiv:1602.06709,", "year": 2016 }, { "authors": [ "Dumitru Erhan", "Pierre-Antoine Manzagol", "Yoshua Bengio", "Samy Bengio", "Pascal Vincent" ], "title": "The difficulty of training deep architectures and the effect of unsupervised pre-training", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Yang Fan", "Fei Tian", "Tao Qin", "Xiang-Yang Li", "Tie-Yan Liu" ], "title": "Learning to teach", "venue": "arXiv preprint arXiv:1805.03643,", "year": 2018 }, { "authors": [ "Carlos Florensa", "David Held", "Markus Wulfmeier", "Michael Zhang", "Pieter Abbeel" ], "title": "Reverse curriculum generation for reinforcement learning", "venue": "arXiv preprint arXiv:1707.05300,", "year": 2017 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Ben Graham" ], "title": "Fractional max-pooling (2014)", "venue": "arXiv preprint arXiv:1412.6071,", "year": 2014 }, { "authors": [ "Alex Graves", "Marc G Bellemare", "Jacob Menick", "Remi Munos", "Koray Kavukcuoglu" ], "title": "Automated curriculum learning for neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Guy Hacohen", "Daphna Weinshall" ], "title": "On the power of curriculum learning in training deep networks", "venue": "arXiv preprint arXiv:1904.03626,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image 
recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Lu Jiang", "Zhengyuan Zhou", "Thomas Leung", "Li-Jia Li", "Li Fei-Fei" ], "title": "Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels", "venue": "arXiv preprint arXiv:1712.05055,", "year": 2017 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "arXiv preprint arXiv:1609.04836,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Hugo Larochelle", "Dumitru Erhan", "Aaron Courville", "James Bergstra", "Yoshua Bengio" ], "title": "An empirical evaluation of deep architectures on problems with many factors of variation", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Xirong Li", "CeesG M Snoek", "Marcel Worring", "Dennis Koelma", "Arnold WM Smeulders" ], "title": "Bootstrapping visual categorization with relevant negatives", "venue": "IEEE Transactions on Multimedia,", "year": 2013 }, { "authors": [ "Senwei Liang", "Yuehaw Kwoo", "Haizhao Yang" ], "title": "Drop-activation: Implicit parameter reduction and harmonic regularization", "venue": "arXiv preprint arXiv:1811.05850,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Ken Nakae", "Shin Ishii" ], "title": "Distributional smoothing with virtual adversarial training", "venue": "arXiv preprint arXiv:1507.00677,", "year": 2015 }, { "authors": [ "Gabriel Pereyra", "George Tucker", "Jan Chorowski", "Łukasz Kaiser", "Geoffrey Hinton" ], "title": "Regularizing neural networks by penalizing confident output distributions", "venue": "arXiv preprint arXiv:1701.06548,", "year": 2017 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Scott Reed", "Honglak Lee", "Dragomir Anguelov", "Christian Szegedy", "Dumitru Erhan", "Andrew Rabinovich" ], "title": "Training deep neural networks on noisy labels with bootstrapping", "venue": "arXiv preprint arXiv:1412.6596,", "year": 2014 }, { "authors": [ "David Rolnick", "Arun Ahuja", "Jonathan Schwarz", "Timothy P Lillicrap", "Greg Wayne" ], "title": "Experience replay for continual learning", "venue": "arXiv preprint arXiv:1811.11682,", "year": 2018 }, { "authors": [ "Jonathan Schwarz", "Wojciech Czarnecki", "Jelena Luketina", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the 
inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Xiaolong Wang", "Abhinav Gupta" ], "title": "Unsupervised learning of visual representations using videos", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Daphna Weinshall", "Gad Cohen", "Dan Amir" ], "title": "Curriculum learning by transfer learning: Theory and experiments with deep networks", "venue": "arXiv preprint arXiv:1802.03796,", "year": 2018 }, { "authors": [ "Lingxi Xie", "Jingdong Wang", "Zhen Wei", "Meng Wang", "Qi Tian" ], "title": "Disturblabel: Regularizing cnn on the loss layer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Yoshihiro Yamada", "Masakazu Iwamura", "Takuya Akiba", "Koichi Kise" ], "title": "Shakedrop regularization for deep residual learning", "venue": "arXiv preprint arXiv:1802.02375,", "year": 2018 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Scaling sgd batch size to 32k for imagenet training", "venue": "arXiv preprint arXiv:1708.03888,", "year": 2017 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks. In British Machine Vision Conference 2016", "venue": "British Machine Vision Association,", "year": 2016 }, { "authors": [ "Ke Zhang", "Miao Sun", "Tony X Han", "Xingfang Yuan", "Liru Guo", "Tao Liu" ], "title": "Residual networks", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep networks have seen rich applications in high-dimensional problems characterized by a large number of labels and a high volume of samples. However, successfully training deep networks to solve problems under such conditions is mystifyingly hard (Erhan et al. (2009); Larochelle et al. (2007)). The go-to solution in most cases is Stochastic Gradient Descent with mini-batches (simple batch learning) and its derivatives. While offering a standardized solution, simple batch learning often fails to find solutions that are simultaneously stable, highly generalizable and scalable to large systems (Das et al. (2016); Keskar et al. (2016); Goyal et al. (2017); You et al. (2017)). This is a by-product of how mini-batches are constructed. For example, the uniform prior assumption over datasets emphasizes equal contributions from each data point regardless of the underlying distribution; small batch sizes help achieve more generalizable solutions, but do not scale as well to vast computational resources as large mini-batches. It is hard to construct a solution that is a perfect compromise between all cases.\nTwo lines of work, curriculum learning and label smoothing, offer alternative strategies to improve learning in deep networks. Curriculum learning, inspired by strategies used for humans (Skinner (1958); Avrahami et al. (1997)), works by gradually increasing the conceptual difficulty of samples used to train deep networks (Bengio et al. (2009); Florensa et al. (2017); Graves et al. (2017)). This has been shown to improve performance on corrupted (Jiang et al. (2017)) and small datasets (Fan et al. (2018)). More recently, deep networks have been used to categorize samples (Weinshall et al. (2018)) and variations on the pace with which these samples were shown to deep networks were analyzed in-depth (Hacohen & Weinshall (2019)). To the best of our knowledge, previous works assumed that samples cover a broad spectrum of difficulty and hence need to be categorized and presented in a specific order. This introduces computational overheads e.g. pre-computing the relative difficulty of samples, and also reduces the effective amount of data from which a model can\nlearn in early epochs. Further, curriculum learning approaches have not been shown to compete with simple training strategies at the top end of performance in image benchmarks.\nA complementary approach to obtaining generalizable solutions is to avoid over-fitting or getting stuck in local minima. In this regard, label smoothing offers an important solution that is invariant to the underlying architecture. Early works like Xie et al. (2016) replace ground-truth labels with noise while Reed et al. (2014) uses other models’ outputs to prevent over-fitting. This idea was extended in Bagherinezhad et al. (2018) to an iterative method which uses logits obtained from previously trained versions of the same deep network. While Miyato et al. (2015) use local distributional smoothness, based on the robustness of a model’s distribution around a data point, to regularize outcomes, Pereyra et al. (2017) penalized highly confident outputs directly. Closest in spirit to our work is the label smoothing method defined in Szegedy et al. (2016), which offers an alternative target distribution for all training samples with no extra data augmentation. In general, label smoothing is applied to all examples regardless of how it affects the network’s understanding of them. 
Further, in methods which use other models to provide logits/labels, the parent network used to provide those labels is often trained using an alternate objective function or needs to be fully re-trained on the current dataset, both of which introduce additional computation.
In this work, we propose LILAC, Learning with Incremental Labels and Adaptive Compensation, which emphasizes a label-based curriculum and adaptive compensation to improve upon previous methods and obtain highly accurate and stable solutions. LILAC is conceived as a method to learn strong embeddings by using the recursive training strategy of incremental learning alongside the use of unlabelled/wrongly-labelled data as hard negative examples. It works in two key phases: 1) incremental label introduction and 2) adaptive compensation.
In the first phase, we incrementally introduce groups of labels in the training process. Data corresponding to labels not yet introduced to the model use a single fake label selected from within the dataset. Once a network has been trained for a fixed number of epochs with this setup, an additional set of ground-truth labels is introduced to the network and the training process continues. By recursively revealing labels, LILAC allows the model sufficient time to develop a strong understanding of each class by contrasting against a large and diverse set of negative examples.
Once all ground-truth labels are revealed, the adaptive compensation phase of training is initiated. This phase mirrors conventional batch learning, except we adaptively replace the target one-hot vector of incorrectly classified samples with a softer distribution. Thus, we avoid adjusting labels across the entire dataset, like previous methods, while elevating the stability and average performance of the model. Further, instead of being pre-computed by an alternative model, these softer distributions are generated on-the-fly from the outputs of the model being trained. We apply LILAC to three standard image benchmarks and compare its performance to the strongest known baselines.
While incremental and continual learning work on evolving data distributions with the addition of memory constraints (Rebuffi et al., 2017; Castro et al., 2018, and derivative works), knowledge distillation (Schwarz et al., 2018; Rolnick et al., 2018, and similar works), or other requirements, this work is a departure into using negative mining and focused training to improve learning on a fully available dataset. In incremental/continual learning works, the amount of data used to retrain the network is often small compared to the original dataset, while in LILAC we fully use the entire dataset, distinguished only by Seen and Unseen labels. Thus, LILAC avoids data-deficient learning. Further, works like Bucher et al. (2016); Li et al. (2013); Wang & Gupta (2015) emphasize the importance of hard negative mining, both in size and diversity, in improving learning.
Although the original formulation of negative mining was based on imbalanced data, recent object detection works have highlighted its importance in contrasting and improving learning in neural networks.
To summarize, our main contributions in LILAC are as follows,
• we introduce a new take on curriculum learning by incrementally learning labels as opposed to samples,
• our method adaptively compensates incorrectly classified samples by softening their target distribution, which improves performance and removes external computational overheads,
• we improve average recognition accuracy and decrease the standard deviation of performance across several image classification benchmarks compared to batch learning, a property not shared by other curriculum learning and label smoothing methods." }, { "heading": "2 LILAC", "text": "In LILAC, our main focus is to induce better learning in deep networks. Instead of the conventional curriculum learning approach of ranking samples, we consider all samples equally beneficial. Early on, we focus on learning labels in fixed increments (Section 2.1). Once the network has had a chance to learn all the labels, we shift to regularizing the network to prevent over-fitting by providing a softer distribution as the target vector for previously misclassified samples (Section 2.2). An overview of the entire algorithm discussed is available in the appendix as Algorithm 1." }, { "heading": "2.1 INCREMENTAL LABEL INTRODUCTION PHASE", "text": "In the incremental phase, we initially replace the ground-truth labels of several classes with a constant held-out label. Gradually, over the course of several fixed intervals of training, we reveal the true labels. Within a fixed interval of training, we keep constant two sets of data: “Seen”, whose ground-truth labels are known, and “Unseen”, whose labels are replaced by a fake value. When training,
Figure 1: (a) Data setup in LILAC: the full dataset is virtually partitioned into Seen classes, which use ground-truth labels, and Unseen classes, which use a fake label. (b) Evolution of the data partition over each incremental step, up to the final incremental step.
mini-batches are uniformly sampled from the entire training set, but the instances from “Unseen” classes use the held-out label. By the end of the final interval, we reveal all ground-truth labels.
We now describe the incremental phase in more detail. At the beginning of the incremental label introduction phase, we virtually partition data into two mutually exclusive sets, S: Seen and U: Unseen, as shown in Fig. 1. Data samples in S use their ground-truth labels as target values while those in U use a designated unseen label, which is held constant throughout the entire training process. LILAC assumes a random ordering of labels, Or(M), where M denotes the total number of labels in the dataset. Within this ordering, the number of labels and corresponding data initially placed in S is defined by the variable b. The remaining labels, M − b, are initially placed in U and incrementally revealed in intervals of m labels, a hyper-parameter defined by the user.
Training in the incremental phase happens at fixed intervals of E epochs each. Within a fixed interval, the virtual data partition is held constant. Every mini-batch of data is sampled uniformly from the entire original dataset, and within each mini-batch, labels are obtained based on their placement in S or U.
Then the number of samples from U is reduced or augmented, using a uniform prior, to match the number of samples from S. This is done to ensure no unfair skew in predictions towards U, since all of its data points use the same designated label. Finally, the curated mini-batches of data are used to train the neural network. At the end of each fixed interval, we reveal another set of m ground-truth labels and move samples of those classes from U to S, after which the entire data curation and training process is repeated for the next interval." }, { "heading": "2.2 ADAPTIVE COMPENSATION", "text": "Once all the ground-truth labels are available to the deep network, we begin the adaptive compensation phase of training. The main idea behind adaptive compensation is that if the network is unable to correctly predict a sample’s label even after sufficient training time, then we alter the target vector to a less peaked distribution. Compared to a one-hot vector, this softer distribution can be learned more easily by the network. Unlike prior methods, we adaptively modify the target vector on-the-fly, and only for incorrectly classified samples.
In this phase, the network is trained for a small number of epochs using standard batch learning. Let T be the total number of training epochs in the incremental phase and batch learning. During the adaptive compensation phase, we start at epoch e, where e > T. For a mini-batch of samples in epoch e, predictions from the model at e − 1 are used to determine the final target vector used in the objective function; specifically, we soften the target vector for an instance iff it was misclassified by the model at the end of epoch e − 1. The final target vector for the ith instance at epoch e, t_{e,i}, is computed based on the model φ_{e−1} using Equation 1.
t_{e,i} =
\begin{cases}
\left(\frac{M\epsilon - 1}{M - 1}\right)\delta_{y^i} + \left(\frac{1 - \epsilon}{M - 1}\right)\mathbf{1}, & \operatorname{argmax}(\phi_{e-1}(x^i)) \neq y^i \\
\delta_{y^i}, & \text{otherwise}
\end{cases} \qquad (1)
Here, (x^i, y^i) denote a training sample and its corresponding ground-truth label for sample index i, while δ_{y^i} represents the corresponding one-hot vector. 1 is a vector of M dimensions with all entries as 1, and ε is a scaling hyper-parameter." }, { "heading": "3 EXPERIMENTS", "text": "Datasets We use three datasets, CIFAR-10, CIFAR-100 (Krizhevsky et al. (2009)) and STL-10 (Coates et al. (2011)), to evaluate our method and validate our claims. CIFAR-10 and CIFAR-100 are 10- and 100-class variants of the popular image benchmark CIFAR. Each of these contains 50,000 images in the training set and 10,000 images in the testing set. STL-10 is a 10-class subset of ImageNet with 500 and 800 samples per class for the training and testing subsets, respectively.
Metrics The common metric used to evaluate the performance of all the learning algorithms is average recognition accuracy (%) and its standard deviation across 5 trials. We also report consistency, a binary metric that indicates whether the training strategy results in higher average performance and lower standard deviation compared to standard batch learning across all datasets.
Experimental Setup For CIFAR-10/100, we use ResNet18 (He et al. (2016)) as the architectural backbone for all methods; for STL-10, we use ResNet34. In each interval of LILAC’s incremental phase, we train the model for 10 epochs for CIFAR-10/100 and 5 epochs for STL-10. During these incremental steps, we use a learning rate of 0.1, 0.01 and 0.1 for CIFAR-10, CIFAR-100, and STL-10, respectively. The standard batch learning settings used across all datasets are listed in the appendix.
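As a reference point for the adaptive compensation rule above, a minimal sketch of the Equation 1 target update, assuming logits_prev holds the epoch-(e − 1) model outputs for the mini-batch:

    import numpy as np

    def adaptive_targets(logits_prev, y, M, eps=0.5):
        # Soften targets only for samples the previous epoch's model got
        # wrong; correctly classified samples keep their one-hot targets.
        t = np.eye(M)[y]
        wrong = logits_prev.argmax(axis=1) != y
        soft = ((M * eps - 1.0) / (M - 1.0)) * np.eye(M)[y] \
               + (1.0 - eps) / (M - 1.0)
        t[wrong] = soft[wrong]
        return t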
These standard batch learning settings also reflect the setup used in LILAC once the incremental portion of training is complete and the algorithm moves into the adaptive compensation phase. Within this phase, epochs 175, 525 and 120 are used as thresholds (epoch T) for CIFAR-10, CIFAR-100 and STL-10, respectively.
Baselines
• Stochastic gradient descent with mini-batches is the baseline against which all methods are compared.
• Curriculum learning (Bengio et al., 2009) forms a family of related works which aim to help models learn faster and optimize to a better minimum. Following the methodology proposed in that work, we artificially create a subset of the dataset called “Simple” by selecting data whose prediction from a linear one-vs-all SVR model, trained to regress to the ground-truth label, lies within a value of 1.1 of that label. The deep network is trained on the “Simple” dataset for a fixed period of time that mirrors the total number of epochs of the incremental phase of LILAC, after which the entire dataset is used to train the network.
• Label Smoothing (Szegedy et al., 2016) is the closest relevant work to use label smoothing as a form of regularization without extra data augmentation. This non-invasive baseline is used as a measure of the importance of regularization and for its ability to boost performance.
• Dynamic Batch Size (DBS) is a custom baseline used to highlight the importance of variable batch size in training neural networks. DBS randomly copies data available within a mini-batch to mimic variable batch size. Further, all ground-truth labels are available to this model throughout the training process.
• Random Augmentation (RA) is a custom baseline used to highlight the importance of virtual data partitions in LILAC. Its implementation closely follows LILAC but excludes the adaptive compensation phase. The main difference between LILAC and RA is that RA uses data from one randomly chosen class in U within a mini-batch, while LILAC uses data from all classes in U to equalize the number of samples from S and U." }, { "heading": "3.1 STANDARDIZED COMPARISON RESULTS", "text": "Table 1 clearly illustrates improvements in average recognition accuracy, a decrease in standard deviation, and consistency when compared to batch learning. While certain setups have the highest performance on specific datasets (e.g., Label Smoothing on CIFAR-10/100), they are not consistent across all datasets and do not find more stable solutions than LILAC (std. of 0.216 compared to 0.127 from LILAC). LILAC is able to achieve superior performance without unnecessary overheads such as computing sample difficulty or irreversibly altering the ground-truth distribution across all samples.
A key takeaway from DBS is the relative drop in standard deviation combined with higher average performance when compared to baselines like fixed curriculum and label smoothing. RA serves to highlight the importance of harvesting data from all classes in U simultaneously for “negative” samples. The variety of data to learn from improves both performance and standard deviation across the board in LILAC w/o AC as opposed to RA. DBS and RA underline the importance of variable batch size and data partitioning in the makeup of LILAC.
We further extend LILAC to train the base PyramidNet with shake-drop regularization (p = 1.0) (Yamada et al. (2018)). From Table 2 we clearly see that LILAC can be extended to provide the highest performance on CIFAR-10 given a standard preprocessing setup.
To provide a fair comparison, we highlight top-performing methods with standard preprocessing setups that avoid partial inputs (at the node or image level), since LILAC was developed with fully available inputs in mind. Across all these learning schemes, LILAC is the only one to consistently increase classification accuracy and decrease the standard deviation across all datasets compared to batch learning." }, { "heading": "3.2 ABLATION: BREAKDOWN OF LILAC’S PHASES", "text": "Fig. 2 illustrates the evolution of the embedding across the span of the incremental phase. This space has more degrees of separation when compared to an equivalent epoch of training with batch learning, where all the labels are available. Table 3 provides a breakdown of the contribution of each phase of LILAC and how they combine to elevate the final performance. Here, in LILAC w/o AC we replace the entire AC phase with simple batch learning, while in Batch + AC we include adaptive compensation with adjusted thresholds. The first half of this table compares the impact of incrementally introducing labels to a deep network against standard batch learning. We clearly observe that performances across Rows 1 and 2 fall within the indicated standard deviation of each other. However, from Fig. 2 we know that LILAC starts from a qualitatively better solution. Combining these results, we conclude that the emphasis on a lengthy secondary batch learning phase erodes overall performance.
The second half of Table 3 shows the impact of adding adaptive compensation to batch learning and LILAC. When added to standard batch learning, there isn’t a clear and conclusive indicator of improvement across all benchmarks in both average performance and standard deviation. However, in combination with the incremental label introduction phase of LILAC, adaptive compensation improves average performance and decreases standard deviation, indicating improved stability and consistency. Given the similarity in learning between the batch setup and LILAC once all labels have been introduced, we show that the embedding space learned by incrementally introducing labels (Fig. 2) is distinct from standard batch learning and is more amenable to AC." }, { "heading": "3.3 PROPERTIES OF LILAC", "text": "Through previous experiments we have established the general applicability of LILAC while contrasting its contributions to those of standard batch learning. In this section we dive deeper to reveal some characteristics of LILAC that further supplement the claim of general applicability. Specifically, we characterize the impact of label ordering, the smoothness of the alternate target vector distribution, and the injection of larger groups of labels in the incremental phase.
Ordering of Labels Throughout the standard experiments, we assume labels are used in ascending order of value. When this is modified to a random ordering or an ascending order of difficulty, results from Table 4 suggest that there is no explicit benefit or pattern. Other than the extra impact of continually fluctuating label orders across trials, there isn’t a large gap in performance. Thus, we claim LILAC is relatively invariant to the order of label introduction.
Smoothness of Target Vector in Adaptive Compensation During adaptive compensation, ε = 0.5 is used in the alternate target vector for samples with failed predictions throughout all experiments in Sections 3.1 and 3.2.
When ε is extended to a variety of values, we observe that most variations of the peak performance still fall within the standard deviation range for each dataset. However, the peak average performance values usually occur for ε between 0.5 and 0.7.
Injection of Label Groups While LILAC was designed to allow the introduction of multiple labels in a single incremental step, throughout the experiments in Sections 3.1 and 3.2 only 1 label was introduced per step, to allow thorough learning while eliminating the chance of conflicting decision boundaries from multiple labels. Revealing multiple labels instead of 1 label per incremental step has a negative impact on the overall performance of the model. Table 4 shows that adding large groups of labels forces lower performance, which is in line with our hypothesis that revealing fewer labels per incremental step makes the embedding space more amenable to adaptive compensation." }, { "heading": "4 CONCLUSION", "text": "In this work, we proposed LILAC, which rethinks curriculum learning based on incrementally learning labels instead of samples. This approach helps kick-start the learning process from a substantially better starting point while making the learned embedding space amenable to adaptive negative logit compensation. Both these techniques combine well in LILAC to show the highest performance on CIFAR-10 for simple data augmentations while easily outperforming batch learning, curriculum learning, and label smoothing on comparable network architectures. The next step in unlocking the full potential of this setup is to include a confidence measure on the predictions of the network so that it can handle the effects of dropout or partial inputs. In further expanding LILAC’s ability to handle partial inputs, we aim to explore its effect on standard incremental learning (memory constrained) while also extending its applicability to more complex neural network architectures." }, { "heading": "A LILAC: ALGORITHM", "text": "Algorithm 1: Training strategy inspired by incremental learning
Initialization;
Input: (X, Y) where Y ∈ {c_1, c_2, .., c_M};
M = total number of labels in the dataset;
m = number of labels to introduce in 1 incremental step;
n = total number of samples;
b = starting incremental batch;
C = {c_1, c_2, .., c_M};
for inc_batch = b to (M/m) do
    for fixed epochs e do
        C̃ = {c_1, .., c_(inc_batch × m + m)};
        n_C̃ = number of samples with labels in C̃;
        S = {(x_i, y_i) | y_i ∈ C̃}, i = 1, .., n_C̃;
        U = {(x_j, y_j) | y_j ∈ C \ C̃}, j = 1, .., n − n_C̃;
        F′ ⊆ U: F′ = {(x_j, y_j) | y_j ∈ C \ C̃}, j = 1, .., n_C̃, selected at random, s.t. |S| = |F′|;
        if inc_batch = M/m and e ≥ δ then
            F′ = update target vectors using Eqn. 1;
        end
        Train model with data (S ∪ F′)
    end
end" }, { "heading": "B HYPER-PARAMETER SETUPS", "text": "" }, { "heading": "C APPLICABILITY TO VIDEOS", "text": "" }, { "heading": "D PROPERTY: VARIATION OF FIXED INTERVAL SIZE IN INCREMENTAL", "text": "From the results in Table 8, we observe that the choice of E is dependent on the dataset. There isn’t an explicit pattern that can be used to select the value of E without trial runs. Further, the available run-time is an important constraint when selecting E from a range of values, since both m and E affect it." } ]
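To complement Algorithm 1, a minimal Python sketch of the incremental phase; minibatches() and model.train_step() are assumed helpers, and the label order is taken as given:

    def lilac_incremental_phase(model, data, M, m, fake_label, E):
        # data: list of (x, y) pairs. Labels are revealed m at a time;
        # classes not yet revealed are Unseen and share fake_label.
        order = list(range(M))
        for step in range(1, M // m + 1):
            seen = set(order[:step * m])
            for _ in range(E):  # fixed interval of E epochs
                for batch in minibatches(data):
                    xb = [x for x, _ in batch]
                    yb = [y if y in seen else fake_label for _, y in batch]
                    # (in LILAC, Unseen samples are additionally subsampled
                    #  or duplicated so Seen/Unseen counts match; omitted)
                    model.train_step(xb, yb)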
2,019
null
SP:c3a5a5600463b8f590e9a2b10f7984973410b043
[ "The paper proposes an imitation learning algorithm that combines support estimation with adversarial training. The key idea is simple: multiply the reward from Random Expert Distillation (RED) with the reward from Generative Adversarial Imitation Learning (GAIL). The new reward combines the best of both methods. Like the GAIL reward, the new reward encourages exploration and can be estimated from a small number of demonstrations. Like the RED reward, the new reward avoids survival bias and is more stable than the adversarial reward.", "This paper proposes an approach for improving adversarial imitation learning, by combining it with support-estimation-based imitation learning. In particular, the paper explores a combination of GAIL (Ho and Ermon, 2016) and RED (Wang et. al., 2019), where the reward for the policy-gradient is a product of the rewards obtained from them separately. The motivation is that, while AIL methods are sample-efficient (in terms of expert data) and implicitly promote useful exploration, they could be unreliable outside the support of the expert policy. Therefore, augmenting them by constraining the imitator to the support of the expert policy (with a method such as RED) could result in an overall better imitation learning algorithm. " ]
We propose Support-guided Adversarial Imitation Learning (SAIL), a generic imitation learning framework that unifies support estimation of the expert policy with the family of Adversarial Imitation Learning (AIL) algorithms. SAIL addresses two important challenges of AIL: the implicit reward bias and potential training instability. We also show that SAIL is at least as efficient as standard AIL. In an extensive evaluation, we demonstrate that the proposed method mitigates the reward bias and achieves better performance and training stability than other baseline methods on a wide range of benchmark control tasks.
[]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Martin Arjovsky", "Léon Bottou" ], "title": "Towards principled methods for training generative adversarial", "venue": "networks. stat,", "year": 2017 }, { "authors": [ "Nir Baram", "Oron Anschel", "Itai Caspi", "Shie Mannor" ], "title": "End-to-end differentiable adversarial imitation learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "Ernesto De Vito", "Lorenzo Rosasco", "Alessandro Toigo" ], "title": "A universally consistent spectral estimator for the support of a distribution", "venue": "Appl Comput Harmonic Anal,", "year": 2014 }, { "authors": [ "Chelsea Finn", "Paul Christiano", "Pieter Abbeel", "Sergey Levine" ], "title": "A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models", "venue": "arXiv preprint arXiv:1611.03852,", "year": 2016 }, { "authors": [ "Justin Fu", "Katie Luo", "Sergey Levine" ], "title": "Learning robust rewards with adversarial inverse reinforcement learning", "venue": "arXiv preprint arXiv:1710.11248,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Kee-Eung Kim", "Hyun Soo Park" ], "title": "Imitation learning via kernel mean embedding", "venue": null, "year": 2018 }, { "authors": [ "Ilya Kostrikov", "Kumar Krishna Agrawal", "Debidatta Dwibedi", "Sergey Levine", "Jonathan Tompson" ], "title": "Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning", "venue": "International Conference on Learning Representation,", "year": 2019 }, { "authors": [ "Andrew Y Ng", "Stuart J Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In Proceedings of the Seventeenth International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "Stéphane Ross", "Drew Bagnell" ], "title": "Efficient reductions for imitation learning", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Alessandro Rudi", "Ernesto De Vito", "Alessandro Verri", "Francesca Odone" ], "title": "Regularized kernel algorithms for support estimation", 
"venue": "Frontiers in Applied Mathematics and Statistics,", "year": 2017 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Fumihiro Sasaki", "Tetsuya Yohira", "Atsuo Kawaguchi" ], "title": "Sample efficient imitation learning for continuous control", "venue": "International Conference on Learning Representation,", "year": 2019 }, { "authors": [ "Yannick Schroecker", "Mel Vecerik", "Jonathan Scholz" ], "title": "Generative predecessor models for sampleefficient imitation learning", "venue": "International Conference On Learning Representations,", "year": 2019 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Shashank Singh", "Ananya Uppal", "Boyue Li", "Chun-Liang Li", "Manzil Zaheer", "Barnabas Poczos" ], "title": "Nonparametric density estimation under adversarial losses", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Wen Sun", "Arun Venkatraman", "Geoffrey J Gordon", "Byron Boots", "J Andrew Bagnell" ], "title": "Deeply aggrevated: Differentiable imitation learning for sequential prediction", "venue": "arXiv preprint arXiv:1703.01030,", "year": 2017 }, { "authors": [ "Ruohan Wang", "Carlo Ciliberto", "Pierluigi Amadori", "Yiannis Demiris" ], "title": "Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation", "venue": null, "year": 1905 }, { "authors": [ "Brian D Ziebart", "Andrew L Maas", "J Andrew Bagnell", "Anind K Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In AAAI,", "year": 2008 }, { "authors": [ "Burda" ], "title": "2018) for support estimation. We use the default networks from RED4", "venue": "We set σ following the heuristic in Wang et al", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The class of Adversarial Imitation Learning (AIL) algorithms learns robust policies that imitate an expert’s actions from a small number of expert trajectories, without further access to the expert or environment signals. AIL iterates between refining a reward via adversarial training, and reinforcement learning (RL) with the learned adversarial reward. For instance, Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) shows the equivalence between some settings of inverse reinforcement learning and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), and recasts imitation learning as distribution matching between the expert and the RL agent. Similarly, Adversarial Inverse Reinforcement Learning (AIRL) (Fu et al., 2017) modifies the GAIL discriminator to learn a reward function robust to changes in dynamics or environment properties.\nAIL mitigates the issue of distributional drift from behavioral cloning (Ross et al., 2011), a classical imitation learning algorithm, and demonstrates good performance with only a small number of expert demonstrations. However, AIL has several important challenges, including implicit reward bias (Kostrikov et al., 2019), potential training instability (Salimans et al., 2016; Brock et al., 2018), and potential sample inefficiency with respect to environment interaction (Sasaki et al., 2019). In this paper, we propose a principled approach towards addressing these issues.\nWang et al. (2019) demonstrated that imitation learning is also feasible by constructing a fixed reward function via estimating the support of the expert policy. Since support estimation only requires expert demonstrations, the method sidesteps the training instability associated with adversarial training. However, we show in Section 4.2 that the reward learned via support estimation deteriorates when expert data is sparse, and leads to poor policy performances.\nSupport estimation and adversarial reward represent two different yet complementary RL signals for imitation learning, both learnable from expert demonstrations. We unify both signals into Supportguided Adversarial Imitation Learning (SAIL), a generic imitation learning framework. SAIL leverages the adversarial reward to guide policy exploration and constrains the policy search to the estimated support of the expert policy. It is compatible with existing AIL algorithms, such as GAIL and AIRL. We also show that SAIL is at least as efficient as standard AIL. In an extensive evaluation, we demonstrate that SAIL mitigates the implicit reward bias and achieves better performance and training stability against baseline methods over a series of benchmark control tasks." }, { "heading": "2 BACKGROUND", "text": "We briefly review the Markov Decision Process (MDP), the context of our imitation learning task, followed by related works on imitation learning.\nMarkov Decision Process We consider an infinite-horizon discounted MDP (S,A, P, r, p0, γ), where S is the set of states, A the set of actions, P : S × A × S → [0, 1] the transition probability, r : S × A → R the reward function, p0 : S → [0, 1] the distribution over initial states, and γ ∈ (0, 1) the discount factor. Let π be a stochastic policy π : S × A → [0, 1] with expected discounted reward Eπ(r(s, a)) , E( ∑∞ t=0 γ\ntr(st, at)) where s0 ∼ p0, at ∼ π(·|st), and st+1 ∼ P (·|st, at) for t ≥ 0. We denote πE the expert policy. 
Behavioral Cloning (BC) learns a policy π : S → A directly from expert trajectories via supervised learning. BC is simple to implement, and effective when expert data is abundant. However, BC is prone to distributional drift: the state distribution encountered by the agent policy deviates from that of the expert demonstrations, due to the accumulation of small mistakes during policy execution. Distributional drift may lead to catastrophic errors (Ross et al., 2011). While several methods address the issue (Ross & Bagnell, 2010; Sun et al., 2017), they often assume further access to the expert during training.

Inverse Reinforcement Learning (IRL) first estimates a reward from expert demonstrations, followed by RL using the estimated reward (Ng & Russell, 2000; Abbeel & Ng, 2004). Building upon a maximum entropy formulation of IRL (Ziebart et al., 2008), Finn et al. (2016) and Fu et al. (2017) explore adversarial IRL and its connection to Generative Adversarial Imitation Learning (Ho & Ermon, 2016).

Imitation Learning via Distribution Matching Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) frames imitation learning as distribution matching between the expert and the RL agent. The authors show the connection between IRL and GANs. Specifically, GAIL imitates the expert by formulating a minimax game:

min_π max_{D∈(0,1)} Eπ(log D(s, a)) + EπE(log(1 − D(s, a))), (1)

where Eπ and EπE denote expectations over the joint state-action distributions of the RL agent and the expert, respectively. GAIL is able to achieve expert performance with a small number of expert trajectories on various benchmark tasks. However, GAIL is relatively sample inefficient with respect to environment interaction, and inherits issues associated with adversarial learning, such as vanishing gradients, training instability and overfitting to expert demonstrations (Arjovsky & Bottou, 2017; Brock et al., 2018).

Recent works have improved the sample efficiency and stability of GAIL. For instance, Generative Moment Matching Imitation Learning (Kim & Park, 2018) replaces the adversarial reward with a non-parametric maximum mean discrepancy estimator to sidestep adversarial learning. Baram et al. (2017) improve sample efficiency with a model-based RL algorithm. Kostrikov et al. (2019) and Sasaki et al. (2019) demonstrate significant gains in sample efficiency with off-policy RL algorithms. In addition, Generative Predecessor Models for Imitation Learning (Schroecker et al., 2019) imitates the expert policy using generative models to reason about alternative histories of demonstrated states.

Our proposed method is closely related to the broad family of AIL algorithms including GAIL and adversarial IRL. It is also complementary to many of the techniques for improving algorithmic efficiency and stability discussed above. In particular, we focus on improving the quality of the learned reward by constraining the adversarial reward to the estimated support of the expert policy.

Imitation Learning via Support Estimation Alternative to AIL, Wang et al. (2019) demonstrate the feasibility of using a fixed RL reward via estimating the support of the expert policy from expert demonstrations. Connecting kernel-based support estimation (De Vito et al., 2014) to Random Network Distillation (Burda et al., 2018), the authors propose Random Expert Distillation (RED) to learn a reward function based on support estimation. Specifically, RED learns the reward parameter θ̂ by minimizing:

min_θ̂ E_{s,a∼πE} ||fθ̂(s, a) − fθ(s, a)||₂², (2)

where fθ : S × A → R^K projects (s, a) from expert demonstrations to some embedding of size K, with randomly initialized θ. The reward is then defined as:

rred(s, a) = exp(−σ||fθ̂(s, a) − fθ(s, a)||₂²), (3)

where σ is a hyperparameter. As optimizing Eq. (2) only requires expert data, RED sidesteps adversarial learning, and casts imitation learning as a standard RL task using the learned reward. While RED works well given sufficient expert data, we show in the experiments that its performance suffers in the more challenging setting of sparse expert data." },
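As a reading aid, below is a minimal PyTorch-style sketch of the distillation objective in Eq. (2) and the reward in Eq. (3). The network shapes, input dimension, and optimizer settings are illustrative assumptions, not the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

# Sketch of RED (Eqs. (2)-(3)): a fixed, randomly initialized target f_theta
# is distilled into a predictor f_theta_hat on expert state-actions only.
f_target = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 32))
f_pred = copy.deepcopy(f_target)
for p in f_pred.parameters():          # re-initialize so the two nets differ
    nn.init.normal_(p, std=0.1)
f_target.requires_grad_(False)         # theta stays fixed and random
opt = torch.optim.Adam(f_pred.parameters(), lr=1e-4)

def distill_step(expert_sa):
    # Minimize Eq. (2) on a batch of expert (s, a) pairs.
    err = (f_pred(expert_sa) - f_target(expert_sa)).pow(2).sum(dim=1)
    loss = err.mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def r_red(sa, sigma=1.0):
    # Eq. (3); sigma is a hyperparameter; reward is near 1 on expert support.
    with torch.no_grad():
        err = (f_pred(sa) - f_target(sa)).pow(2).sum(dim=1)
    return torch.exp(-sigma * err)
```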
{ "heading": "3 METHOD", "text": "Formally, we consider the task of learning a reward function r̂(s, a) from a finite set of trajectories {τi}_{i=1}^N, sampled from the expert policy πE within an MDP. Each trajectory is a sequence of state-action tuples of the form τi = {s1, a1, s2, a2, ..., sT, aT}. Assuming that the expert trajectories are consistent with some latent reward function r*(s, a), we aim to learn a policy that achieves good performance with respect to r*(s, a) by applying RL on the learned reward function r̂(s, a).

In this section, we first discuss the advantages and shortcomings of AIL to motivate our method. We then introduce Support-guided Adversarial Imitation Learning (SAIL), and present a theoretical analysis that compares SAIL with existing methods, specifically GAIL." }, { "heading": "3.1 ADVERSARIAL IMITATION LEARNING", "text": "A clear advantage of AIL resides in its low sample complexity with respect to expert data. For instance, GAIL requires as little as 200 state-action tuples from the expert to achieve imitation. The reason is that the adversarial reward may be interpreted as an effective exploration mechanism for the RL agent. To see this, consider the learned reward function under the optimality assumption. With the optimal discriminator to Eq. (1), D*(s, a) = pπ(s, a) / (pπE(s, a) + pπ(s, a)), a common reward for GAIL is

rgail(s, a) = −log(D*(s, a)) = log(1 + pπE(s, a) / pπ(s, a)) = log(1 + φ(s, a)). (4)

Eq. (4) shows that the adversarial reward only depends on the ratio φ(s, a) = pπE(s, a) / pπ(s, a). Intuitively, rgail incentivizes the RL agent towards under-visited state-actions, where φ(s, a) > 1, and away from over-visited state-actions, where φ(s, a) < 1. When πE and π match exactly, rgail converges to an indicator function for the support of πE, since φ(s, a) = 1 for all (s, a) ∈ supp(πE) (Goodfellow et al., 2014). In practice, the adversarial reward is unlikely to converge, as pπE is estimated from a finite set of expert demonstrations. Instead, the adversarial reward continuously drives the agent to explore by evolving the reward landscape.

However, AIL also presents several challenges. Kostrikov et al. (2019) demonstrated that the reward −log D(s, a) suffers from an implicit survival bias, as the non-negative reward may lead to sub-optimal behaviors in goal-oriented tasks, where the agent learns to move around the goal to accumulate rewards instead of completing the tasks. While the authors resolve the issue by introducing absorbing states, the solution assumes extra RL signals from the environment, including access to the time limit of an environment to detect early termination of training episodes. In Section 4.1, we empirically demonstrate the survival bias on Lunar Lander, a common RL benchmark, by showing that agents trained with GAIL often hover over the goal location1. We also show that our proposed method is able to robustly imitate the expert.

1The agents still learn to land for some initial conditions.
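For completeness, the following is a hedged PyTorch-style sketch of the discriminator update implied by Eq. (1) and the induced reward of Eq. (4); the architecture, input dimension, and optimizer settings are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

# Sketch of one discriminator step for the minimax game in Eq. (1).
# With this sign convention, D is pushed towards 1 on agent pairs and
# towards 0 on expert pairs, so -log D rewards expert-like behavior.
disc = nn.Sequential(nn.Linear(10, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)

def disc_step(agent_sa, expert_sa):
    # Maximize E_pi[log D] + E_piE[log(1 - D)] over D, i.e. minimize the negative.
    d_agent = torch.sigmoid(disc(agent_sa))
    d_expert = torch.sigmoid(disc(expert_sa))
    loss = -(torch.log(d_agent + 1e-8).mean()
             + torch.log(1.0 - d_expert + 1e-8).mean())
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def r_gail(sa):
    # Eq. (4) style reward; the bounded variant discussed later uses 1 - D.
    with torch.no_grad():
        return -torch.log(torch.sigmoid(disc(sa)) + 1e-8)
```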
Another challenge with AIL is potential training instability. Wang et al. (2019) demonstrated empirically that the adversarial reward could be unreliable in regions where the expert data is sparse, causing the agent to diverge from the intended behavior. When the agent policy is substantially different from the expert policy, the discriminator can differentiate them with high confidence, resulting in very low rewards and a significant slowdown in training, similar to the vanishing gradient problem in GAN training (Arjovsky & Bottou, 2017)." }, { "heading": "3.2 SUPPORT-GUIDED ADVERSARIAL IMITATION LEARNING", "text": "We propose a novel reward function by combining the standard adversarial reward rgail with the corresponding support guidance rred:

rsail(s, a) = rred(s, a) · rgail(s, a). (5)

SAIL is designed to leverage the exploration mechanism offered by the adversarial reward, and to constrain the agent to the estimated support of the expert policy. Despite being a simple modification, support guidance provides strong reward shaping to address the challenges discussed in the previous section. As both support guidance and adversarial reward are learnable from expert demonstrations, our method requires no further assumptions than standard AIL.

SAIL addresses the survival bias in goal-oriented tasks by encouraging the agent to stop at the goal and complete the task. In particular, rred shapes the adversarial reward by favoring stopping at the goal over all other actions, as stopping at the goal is on the support of the expert policy, while the other actions are not. We demonstrate empirically in Section 4.1 that SAIL assigns significantly higher reward towards completing the task and corrects for the bias. To improve training stability, SAIL constrains the RL agent to the estimated support of the expert policy, where rgail provides a more reliable RL signal (Wang et al., 2019). As rred tends to be very small (ideally zero) for (s, a) ∉ supp(πE), rsail discourages the agent from exploring those state-actions by masking away the rewards. This is a desirable property, as the quality of the RL signals beyond the support of the expert policy cannot be guaranteed. We demonstrate the improved training stability on the Mujoco benchmark tasks in Section 4.2.

We provide the pseudocode implementation of SAIL in Algorithm 1. The algorithm computes rred by estimating the support of the expert policy, followed by iterative updates of the policy and rgail. We apply the Trust Region Policy Optimization (TRPO) algorithm (Schulman et al., 2015) with the reward rsail for policy updates.

Algorithm 1 SUPPORT-GUIDED ADVERSARIAL IMITATION LEARNING
1: Input: Expert trajectories τE = {(si, ai)}_{i=1}^N, function models Θ, initial policy πω0, initial discriminator parameters w0, learning rate lD.
2: rred = RED(Θ, τE)
3: for i = 0, 1, . . .
4:   sample a trajectory τi ∼ π
5:   wi+1 = wi + lD (Êτi(∇ log Dwi(s, a)) + ÊτE(∇ log(1 − Dwi(s, a))))
6:   rgail : (s, a) ↦ 1 − Dwi+1(s, a)
7:   πωi+1 = TRPO(rred · rgail, πωi)
8: def RED(Θ, τ)
9:   Sample θ ∈ Θ
10:  θ̂ = MINIMIZE(fθ̂, fθ, τ)
11:  return rred : (s, a) ↦ exp(−σ||fθ̂(s, a) − fθ(s, a)||₂²)
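A condensed Python sketch of Algorithm 1's main loop follows; sample_trajectory and trpo_update are hypothetical placeholders for the rollout and TRPO steps, and disc_step, r_red, and r_gail stand in for the updates sketched above.

```python
# Sketch of Algorithm 1: r_red is the fixed support-estimation reward from
# RED; the discriminator and policy alternate as in GAIL, but the policy is
# trained on the product reward of Eq. (5).
def sail(expert_sa, policy, disc_step, r_red, r_gail,
         sample_trajectory, trpo_update, n_iters=1000):
    for _ in range(n_iters):
        traj_sa = sample_trajectory(policy)   # line 4: roll out current policy
        disc_step(traj_sa, expert_sa)         # line 5: update discriminator

        def reward(sa):                       # line 7: r_sail = r_red * r_gail
            return r_red(sa) * r_gail(sa)     # bounded variant would use 1 - D

        policy = trpo_update(policy, traj_sa, reward)
    return policy
```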
Reward Variants In practice, we observe that constraining the range of the adversarial reward generally produces lower-variance policies. Specifically, we transform rgail in Eq. (5) from −log D(s, a) ∈ [0, ∞] to 1 − D(s, a) ∈ [0, 1]. For ease of notation, we refer to the bounded variant as SAIL-b, and the unbounded variant as SAIL. Similarly, we denote the bounded GAIL reward as GAIL-b. We include a comparison between the reward variants in the experiments." }, { "heading": "3.3 COMPARING SAIL WITH GAIL", "text": "In this section, we show that SAIL is at least as efficient as GAIL in its sample complexity for expert data, and provides comparable RL signals on the expert policy's support. We note that our analysis could be similarly applied to other AIL methods, suggesting the broad applicability of our approach.

We begin with the asymptotic setting, where the number of expert trajectories tends to infinity. In this case, the discriminators of GAIL, RED and SAIL all ultimately recover the expert policy's support at convergence (see Ho & Ermon (2016) for GAIL and Wang et al. (2019) for RED; SAIL follows from their combination). Moreover, for both GAIL and SAIL, the expert and agent policy distributions match exactly at convergence, implying successful imitation learning. Therefore, it is critical to characterize the rates of convergence of the two methods, namely their relative sample complexity with respect to the number of expert demonstrations.

Formally, let (s, a) ∉ supp(πE). Prototypical learning bounds for an estimator r̂ ≥ 0 of the support provide high-probability bounds of the form P(r̂(s, a) ≤ c log(1/δ) n^{−α}) > 1 − δ for any confidence δ ∈ (0, 1], with c a constant not depending on δ or the number n of samples (i.e., expert state-actions). Here, α > 0 represents the learning rate, namely how fast the estimator converges to the support. By choosing the reward in Eq. (5), we are leveraging the faster of the two learning rates αred and αgail with respect to support estimation. At the time of writing, no results are available to characterize the sample complexity of GAIL (loosely speaking, the α and c introduced above). Therefore, we proceed by focusing on a relative comparison with SAIL. In particular, we show the following (see the appendix for a proof).

Proposition 1. Assume that for any (s, a) ∉ supp(πE) the rewards for RED and GAIL have the following learning rates in estimating the support:

P(rred(s, a) > cred log(1/δ) / n^{αred}) ≤ δ,   P(rgail(s, a) > cgail log(1/δ) / n^{αgail}) ≤ δ. (6)

Then, for any δ ∈ (0, 1] and any (s, a) ∉ supp(πE), the following holds:

rsail(s, a) ≤ min(cred Rgail / n^{αred}, cgail Rred / n^{αgail}) log(1/δ), (7)

with probability at least 1 − δ, where Rred and Rgail are the upper bounds of rred and rgail, respectively.

Eq. (7) shows that SAIL is at least as fast as the faster of RED and GAIL with respect to support estimation, implying that SAIL is at least as efficient as GAIL in the sample complexity for expert data. Eq. (7) also indicates the quality of the learned reward, as state-actions outside the expert's support should be assigned minimum reward.

Proposition 2. For any (s, a) ∈ supp(πE) and any δ ∈ (0, 1], we assume that

P(|rred(s, a) − 1| > cred log(1/δ) / n^{αred}) < δ. (8)

Then the following holds with probability at least 1 − δ:

|rsail(s, a) − rgail(s, a)| ≤ (cred Rgail / n^{αred}) log(1/δ). (9)

Eq. (9) shows that on the expert policy's support, rsail is close to rgail up to a precision that improves with the number of expert state-actions. SAIL thus provides RL signals comparable to GAIL on the expert policy's support.

It is also worth noting that the analysis could explain why rred + rgail is a less viable approach for combining the two RL signals. The analogous bound to Eq. (7) would be the sum of the errors from the two methods, implying the slower of the two learning rates, while Eq. (9) would improve only by a constant, as Rgail would be absent from Eq. (9). Our preliminary experiments indicated that rred + rgail performed noticeably worse than Eq. (5).

Lastly, we comment on whether the assumptions in Eqs. (6) and (8) are satisfied in practice. Following the kernel-based version of RED (Wang et al., 2019), we can borrow previous results from the set-learning literature, which guarantee RED a rate of αred = 1/2 (De Vito et al., 2014; Rudi et al., 2017). These rates have been shown to be optimal: no estimator of the support can have rates faster than n^{−1/2}, unless additional assumptions are imposed. Learning rates for distribution matching with GANs are still an active area of research, and conclusive results characterizing the convergence rates of these estimators are not available. We refer to Singh et al. (2018) for an in-depth analysis of the topic." },
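For readability, the short chain of inequalities behind Eq. (7), proved in Appendix A, can be restated compactly:

```latex
% Compact form of the Appendix A argument for Proposition 1: bound each
% factor of the product reward by the other factor's range, then apply
% the high-probability bounds of Eq. (6).
\begin{align*}
r_{\mathrm{sail}}(s,a)
  &= r_{\mathrm{red}}(s,a)\, r_{\mathrm{gail}}(s,a)
   \le \min\!\big(r_{\mathrm{red}}(s,a)\, R_{\mathrm{gail}},\;
                  r_{\mathrm{gail}}(s,a)\, R_{\mathrm{red}}\big) \\
  &\le \min\!\Big(\tfrac{c_{\mathrm{red}} R_{\mathrm{gail}}}{n^{\alpha_{\mathrm{red}}}},\;
                  \tfrac{c_{\mathrm{gail}} R_{\mathrm{red}}}{n^{\alpha_{\mathrm{gail}}}}\Big)
       \log\tfrac{1}{\delta}
  \quad \text{w.p.\ at least } 1-\delta,\ (s,a)\notin\mathrm{supp}(\pi_E).
\end{align*}
```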
{ "heading": "4 EXPERIMENTS", "text": "We evaluate the proposed method against BC, GAIL and RED on Lunar Lander and six Mujoco control tasks including Hopper, Reacher, HalfCheetah, Walker2d, Ant, and Humanoid. We omit evaluation against methods using off-policy RL algorithms, as they are not the focus of this work. We also note that support guidance is complementary to such methods.

Method | Default | No-terminal
BC | 100.38 ± 130.91 | 100.38 ± 130.91
RED | 13.75 ± 53.43 | -39.33 ± 24.39
GAIL | 258.30 ± 28.98 | 169.73 ± 80.84
GAIL-b | 250.53 ± 67.07 | -69.33 ± 79.76
SAIL | 257.02 ± 20.66 | 237.96 ± 49.70
SAIL-b | 262.97 ± 18.11 | 256.83 ± 20.99
Expert | 253.58 ± 31.27 | 253.58 ± 31.27

Table 1: Average environment reward and standard deviation on Lunar Lander, evaluated over 50 runs for the default and no-terminal environments." }, { "heading": "4.1 LUNAR LANDER", "text": "We demonstrate that SAIL variants mitigate the survival bias in Lunar Lander (Fig. 1) from OpenAI Gym (Brockman et al., 2016), while other baseline methods imitate the expert inconsistently. In this task, the agent is required to control a spacecraft to safely land between the flags. A human expert provided 10 demonstrations for this task as an imitation target.

We observe that even without the environment reward, Lunar Lander provides a natural RL signal by terminating episodes early when crashes are detected, thus encouraging the agent to avoid crashing. Consequently, all methods are able to successfully imitate the expert and land the spacecraft appropriately. SAIL variants perform slightly better than GAIL variants on the average reward, and achieve noticeably lower standard deviation. The average performances and the standard deviations evaluated over 50 runs are presented in Table 1.

To construct a more challenging task, we disable all early-termination features of the environment, thus removing the environment RL signals. In this no-terminal environment, a training episode only ends after the time limit. We present each algorithm's performance for the no-terminal setting in Table 1. SAIL variants outperform GAIL variants. Specifically, we observe that GAIL learns to land for some initial conditions, while exhibiting survival bias in other scenarios by hovering at the goal. In contrast, SAIL variants are still able to recover the expert policy.2

2An illustrative video is available at https://vimeo.com/361835881

To visualize the shaping effect from support guidance, we plot the average learned reward for GAIL, SAIL-b and RED at goal states.
The goal states are selected from the expert trajectories and satisfy two conditions: 1) touching the ground (the state vector has indicator variables for ground contact), and 2) having "no op" as the corresponding action. As the adversarial reward functions are dynamic, we snapshot the learned rewards when the algorithms obtain their best policies, respectively. Fig. 3 shows the average rewards for each available action, averaged across all the goal states. Compared with the other algorithms, SAIL-b assigns a significantly higher reward to "no op", which facilitates the agent's learning. Though GAIL and RED still favor "no op" over the other actions, the differences in reward are much smaller, causing less consistent landing behaviors.

We further observe that all evaluated AIL methods oscillate between partially hovering behavior and landing behavior during policy learning. This observation suggests that our method only partially addresses the survival bias, a limitation we will tackle in future work. This is likely caused by SAIL's non-negative reward, despite the beneficial shaping effect from support estimation. For additional experiment results and discussion on Lunar Lander, please refer to the appendix." }, { "heading": "4.2 MUJOCO TASKS", "text": "Mujoco control tasks have been commonly used as the standard benchmark for AIL. We evaluate SAIL against GAIL, RED and BC on Hopper, Reacher, HalfCheetah, Walker2d, Ant and Humanoid. We adopt the same experimental setup presented in Ho & Ermon (2016) by sub-sampling the expert trajectories every 20 samples. Consistent with the observation from Kostrikov et al. (2019), our preliminary experiments show that sub-sampling presents a more challenging setting, as BC is competitive with AIL when full trajectories are used. In our experiments, we also adopt the minimum number of expert trajectories specified in Ho & Ermon (2016) for each task. More details on the experiment setup are available in the appendix.

We apply each algorithm using 5 different random seeds in all Mujoco tasks. Table 2 shows the performance comparison between the evaluated algorithms. We report the mean performance and standard deviation for each algorithm over 50 evaluation runs, choosing the best policies obtained for each algorithm out of the 5 random seeds.

The results show that SAIL-b is comparable to GAIL on Hopper, and outperforms the other methods on all other tasks. We note that RED significantly underperforms in the sub-sampling setting, while Wang et al. (2019) used full trajectories in their experiments. Across all tasks, SAIL-b generally achieves lower standard deviation compared to the other algorithms, in particular for Humanoid, indicating the robustness of the learned policies.

We stress that standard deviation is also a critical metric, as it indicates the robustness of the learned policies when presented with different states. For instance, the large standard deviations in Humanoid are caused by occasional crashes, which may be highly undesirable depending on the intended applications. To illustrate the robustness of the learned policies, we plot the histogram of all 50 evaluations in Humanoid for RED, GAIL-b and SAIL-b in Fig. 2. The figure shows that SAIL-b performs consistently with expert performance. Though GAIL-b appears to be only slightly worse in average performance, the degradation is caused by occasional and highly undesirable crashes, suggesting incomplete imitation of the expert.
RED performs the worst in average performance, but is consistent, with no failure modes detected. The result suggests that the proposed method combines the advantages of both support guidance and adversarial learning.

Comparing SAIL against SAIL-b, we observe that the bounded variant generally produces policies with smaller standard deviations and better performances, especially for Ant and Humanoid. This is likely due to the fact that SAIL-b receives equal contributions from both support guidance and adversarial learning, as rred and rgail have the same range in this formulation. In addition, we note that GAIL fails to imitate the expert in Ant, while GAIL-b performs significantly better. The results suggest that restricting the range of the adversarial reward can improve performance." }, { "heading": "4.3 TRAINING STABILITY AND SAMPLE EFFICIENCY", "text": "To assess the sensitivity with respect to random seeds, we plot the training progress against the number of iterations for the evaluated algorithms in Fig. 4. Each iteration consists of 1000 environment steps. The figure reports the mean and standard deviation of each algorithm across the 5 random seeds.

Fig. 4 shows that SAIL-b is more sample efficient and stable in the Reacher, Ant and Humanoid tasks, and is comparable to the other algorithms in the remaining tasks. Consistent with our analysis in Section 3.3, SAIL-b appears at least as efficient as GAIL even when the support guidance (i.e., the performance of RED) suffers from insufficient expert data in Hopper, HalfCheetah and Walker2d. In Reacher, Ant and Humanoid, SAIL-b benefits from the support guidance and achieves better performance and training stability. In particular, we note that without support guidance, GAIL fails to imitate the expert in Ant (Fig. 4e). Similar failures were also observed in Kostrikov et al. (2019). GAIL is also more sensitive to initial conditions: in Humanoid, GAIL converged to sub-optimal policies in 2 out of 5 seeds. Lastly, while RED improves noticeably faster during early training in Humanoid, it eventually converged to a sub-optimal policy." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose Support-guided Adversarial Imitation Learning by combining support guidance with adversarial imitation learning. Our approach is complementary to existing adversarial imitation learning algorithms, and addresses several challenges associated with them. More broadly, our results show that expert demonstrations contain rich sources of information for imitation learning. Effectively combining different sources of reinforcement learning signals from the expert demonstrations produces more efficient and stable algorithms by constraining the policy search space, and appears to be a promising direction for future research." }, { "heading": "A PROOF FOR PROPOSITIONS 1 AND 2", "text": "Observe that for any (s, a) ∈ S × A,

rsail(s, a) = rred(s, a) · rgail(s, a) ≤ min(rred(s, a) Rgail, rgail(s, a) Rred). (10)

By the assumption on the learning rate in Eq. (6), one of the two following events holds with probability at least 1 − δ, for any (s, a) ∉ supp(πE) and δ ∈ (0, 1]:

rred(s, a) ≤ cred log(1/δ) / n^{αred}   or   rgail(s, a) ≤ cgail log(1/δ) / n^{αgail}. (11)

Plugging the above upper bounds into Eq. (10) yields the desired result in Eq. (7).

By the assumption in Eq. (8), the following event holds with probability at least 1 − δ for (s, a) ∈ supp(πE):

|rred(s, a) − 1| ≤ cred log(1/δ) / n^{αred}. (12)
Plugging this inequality into the definition of rsail, we obtain

|rsail(s, a) − rgail(s, a)| = |rgail(s, a)(rred(s, a) − 1)| (13)
≤ |rgail(s, a)| |rred(s, a) − 1| (14)
≤ (cred Rgail / n^{αred}) log(1/δ), (15)

which is exactly Eq. (9)." }, { "heading": "B EXPERIMENT DETAILS", "text": "The experiments are based on OpenAI's baselines3 and the original implementation of RED4. We adapted the code from RED4 for our experiments, and used the accompanying dataset of expert trajectories. 4 Nvidia GTX1070 GPUs were used in the experiments.

Table 3 shows the environment information, the number of environment steps, and the number of expert trajectories used for each task. Each full trajectory consists of 1000 (s, a) pairs. They are sub-sampled during the experiments.

3https://github.com/openai/baselines
4https://github.com/RuohanW/RED

B.1 NETWORK ARCHITECTURE

The default policy network from OpenAI's baselines is used for all tasks: two fully-connected layers of 100 units each, with tanh nonlinearities. The discriminator networks and the value function networks use the same architecture.

RED and SAIL use RND (Burda et al., 2018) for support estimation. We use the default networks from RED4. We set σ following the heuristic in Wang et al. (2019) that (s, a) from the expert trajectories mostly have reward close to 1.

B.2 HYPERPARAMETERS

For fair comparison, all algorithms share hyperparameters for each task. We present them in the table below, including the discriminator learning rate lD, discount factor γ, number of policy steps per iteration nG, and whether the policy has fixed variance. All other hyperparameters are set to their default values from OpenAI's baselines." }, { "heading": "C ADDITIONAL RESULTS ON LUNAR LANDER", "text": "In the default environment, Lunar Lander contains several terminal states, including crashing, flying out of view, and landing at the goal. In the no-terminal environment, all terminal states are disabled, such that the agent must rely solely on the expert demonstrations for training signals.

To compare our method with the technique of introducing a virtual absorbing state (AS) (Kostrikov et al., 2019), we also construct a goal-terminal environment where the only terminal state is successful landing at the goal, because the AS technique cannot be directly applied in the no-terminal environment. We present the results in Appendix C.

The results suggest that AS overall improves both the mean performance and the standard deviations for both GAIL and SAIL. Specifically, the technique is able to mitigate the survival bias in GAIL significantly. However, SAIL still compares favorably to the technique in the goal-terminal environment. Further, since AS and support guidance are not mutually exclusive, we also combine them and report the performances. The results suggest that support guidance is compatible with AS, and achieves overall the best performance with low standard deviations.

The results also suggest that both AS and support guidance partially mitigate the reward bias, but do not fully solve it. We will further explore this issue in future work." } ]
2019
null
SP:812c4e2bd2b3e6b25fc6869775bea958498cbfd1
[ "This paper tackles an issue imitation learning approaches face. More specifically, policies learned in this manner can often fail when they encounter new states not seen in demonstrations. The paper proposes a method for learning value functions that are more conservative on unseen states, which encourages the learned policies to stay within the distribution of training states. Theoretical results are derived to provide some support for the approach. A practical algorithm is also presented and experiments on continuous control tasks display the effectiveness of the method, with particularly good results on imitation learning followed by reinforcement learning.", "This work presents the value iteration with negative sampling (VINS) algorithm, a method for accelerating reinforcement learning using expert demonstrations. In addition to learning an expert policy through behavioral cloning, VINS learns an initial value function which is biased to assign smaller expected values to states not encountered during demonstrations. This is done by augmenting the demonstration data with states that have been randomly perturbed, and penalizing the value targets for these states by a factor proportional to their Euclidean distance to the original state. In addition to the policy and value function, VINS also learns a one-step dynamics model used to select actions against the learned value function. As the value function learned in VINS is only defined with respect to the current state, action values are estimated by sampling future states using the learned model, and computing the value of these sampled states." ]
Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently. However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned policy. We introduce a notion of conservatively-extrapolated value functions, which provably lead to policies with self-correction. We design an algorithm, Value Iteration with Negative Sampling (VINS), that practically learns such value functions with conservative extrapolation. We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks. We also propose using VINS to initialize a reinforcement learning algorithm, which is shown to outperform prior work in sample efficiency.
[ { "affiliations": [], "name": "NEGATIVE SAMPLING" }, { "affiliations": [], "name": "Yuping Luo" } ]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Jacopo Aleotti", "Stefano Caselli" ], "title": "Grasp recognition in virtual reality for robot pregrasp planning by demonstration", "venue": "In Proceedings 2006 IEEE International Conference on Robotics and Automation,", "year": 2006 }, { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Brenna D Argall", "Sonia Chernova", "Manuela Veloso", "Brett Browning" ], "title": "A survey of robot learning from demonstration", "venue": "Robotics and autonomous systems,", "year": 2009 }, { "authors": [ "J Andrew Bagnell" ], "title": "An invitation to imitation", "venue": "Technical report, CARNEGIE-MELLON UNIV PITTSBURGH PA ROBOTICS INST,", "year": 2015 }, { "authors": [ "Michael Bain", "Claude Sommut" ], "title": "A framework for behavioural claning", "venue": "Machine intelligence,", "year": 1999 }, { "authors": [ "Jessica Chemali", "Alessandro Lazaric" ], "title": "Direct policy iteration with demonstrations", "venue": "In Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ignasi Clavera", "Jonas Rothfuss", "John Schulman", "Yasuhiro Fujita", "Tamim Asfour", "Pieter Abbeel" ], "title": "Model-based reinforcement learning via meta-policy optimization", "venue": "arXiv preprint arXiv:1809.05214,", "year": 2018 }, { "authors": [ "Thomas Degris", "Martha White", "Richard S Sutton" ], "title": "Off-policy actor-critic", "venue": "arXiv preprint arXiv:1205.4839,", "year": 2012 }, { "authors": [ "Chelsea Finn", "Paul Christiano", "Pieter Abbeel", "Sergey Levine" ], "title": "A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models", "venue": "arXiv preprint arXiv:1611.03852,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Sergey Levine", "Pieter Abbeel" ], "title": "Guided cost learning: Deep inverse optimal control via policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Justin Fu", "Katie Luo", "Sergey Levine" ], "title": "Learning robust rewards with adversarial inverse reinforcement learning", "venue": "arXiv preprint arXiv:1710.11248,", "year": 2017 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "arXiv preprint arXiv:1812.02900,", "year": 2018 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "arXiv preprint arXiv:1802.09477,", "year": 2018 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "arXiv preprint arXiv:1802.09477,", "year": 2018 
}, { "authors": [ "Yang Gao", "Ji Lin", "Fisher Yu", "Sergey Levine", "Trevor Darrell" ], "title": "Reinforcement learning from imperfect demonstrations", "venue": "arXiv preprint arXiv:1802.05313,", "year": 2018 }, { "authors": [ "Shixiang Gu", "Timothy Lillicrap", "Zoubin Ghahramani", "Richard E Turner", "Sergey Levine" ], "title": "Q-prop: Sampleefficient policy gradient with an off-policy critic", "venue": "arXiv preprint arXiv:1611.02247,", "year": 2016 }, { "authors": [ "Michael U Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Todd Hester", "Matej Vecerik", "Olivier Pietquin", "Marc Lanctot", "Tom Schaul", "Bilal Piot", "Dan Horgan", "John Quan", "Andrew Sendonaris", "Ian Osband" ], "title": "Deep q-learning from demonstrations", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Wonseok Jeon", "Seokin Seo", "Kee-Eung Kim" ], "title": "A bayesian approach to generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Dmitry Kalashnikov", "Alex Irpan", "Peter Pastor", "Julian Ibarz", "Alexander Herzog", "Eric Jang", "Deirdre Quillen", "Ethan Holly", "Mrinal Kalakrishnan", "Vincent Vanhoucke" ], "title": "Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation", "venue": "arXiv preprint arXiv:1806.10293,", "year": 2018 }, { "authors": [ "S Mohammad Khansari-Zadeh", "Aude Billard" ], "title": "Learning stable nonlinear dynamical systems with gaussian mixture models", "venue": "IEEE Transactions on Robotics,", "year": 2011 }, { "authors": [ "Beomjoon Kim", "Amir-massoud Farahmand", "Joelle Pineau", "Doina Precup" ], "title": "Learning from limited demonstrations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Kee-Eung Kim", "Hyun Soo Park" ], "title": "Imitation learning via kernel mean embedding", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Ilya Kostrikov", "Kumar Krishna Agrawal", "Debidatta Dwibedi", "Sergey Levine", "Jonathan Tompson" ], "title": "Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation", "venue": null, "year": 2018 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-ensemble trust-region policy optimization", "venue": "arXiv preprint arXiv:1802.10592,", "year": 2018 }, { "authors": [ "Michael Laskey", "Jonathan Lee", "Roy Fox", "Anca Dragan", "Ken Goldberg" ], "title": "Dart: Noise injection for robust imitation learning", "venue": "arXiv preprint arXiv:1703.09327,", "year": 2017 }, { "authors": [ "Martin 
Lawitzky", "Jose Ramon Medina", "Dongheui Lee", "Sandra Hirche" ], "title": "Feedback motion planning and learning from demonstration in physical robotic assistance: differences and synergies", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Hoang M Le", "Yisong Yue", "Peter Carr", "Patrick Lucey" ], "title": "Coordinated multi-agent imitation learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 1995 }, { "authors": [ "Hoang M Le", "Nan Jiang", "Alekh Agarwal", "Miroslav Dudík", "Yisong Yue", "Hal Daumé III" ], "title": "Hierarchical imitation and reinforcement learning", "venue": "arXiv preprint arXiv:1803.00590,", "year": 2018 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Yuping Luo", "Huazhe Xu", "Yuanzhi Li", "Yuandong Tian", "Trevor Darrell", "Tengyu Ma" ], "title": "Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees", "venue": null, "year": 2018 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Rémi Munos", "Tom Stepleton", "Anna Harutyunyan", "Marc Bellemare" ], "title": "Safe and efficient off-policy reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ashvin Nair", "Bob McGrew", "Marcin Andrychowicz", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Overcoming exploration in reinforcement learning with demonstrations", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Andrew Y Ng", "Stuart J Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In Icml,", "year": 2000 }, { "authors": [ "Takayuki Osa", "Amir M Ghalamzan Esfahani", "Rustam Stolkin", "Rudolf Lioutikov", "Jan Peters", "Gerhard Neumann" ], "title": "Guiding trajectory optimization by demonstrated distributions", "venue": "IEEE Robotics and Automation Letters,", "year": 2017 }, { "authors": [ "Razvan Pascanu", "Yujia Li", "Oriol Vinyals", "Nicolas Heess", "Lars Buesing", "Sebastien Racanière", "David Reichert", "Théophane Weber", "Daan Wierstra", "Peter Battaglia" ], "title": "Learning model-based planning from scratch", "venue": "arXiv preprint arXiv:1707.06170,", "year": 2017 }, { "authors": [ "Bilal Piot", "Matthieu Geist", "Olivier Pietquin" ], "title": "Boosted bellman residual minimization handling expert demonstrations", "venue": "In Joint European 
{ "authors": [ "Matthias Plappert", "Marcin Andrychowicz", "Alex Ray", "Bob McGrew", "Bowen Baker", "Glenn Powell", "Jonas Schneider", "Josh Tobin", "Maciek Chociej", "Peter Welinder", "Vikash Kumar", "Wojciech Zaremba" ], "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research", "venue": null, "year": 2018 }, { "authors": [ "Dean A Pomerleau" ], "title": "Alvinn: An autonomous land vehicle in a neural network", "venue": "In Advances in neural information processing systems,", "year": 1989 }, { "authors": [ "Aravind Rajeswaran", "Vikash Kumar", "Abhishek Gupta", "Giulia Vezzani", "John Schulman", "Emanuel Todorov", "Sergey Levine" ], "title": "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations", "venue": "arXiv preprint arXiv:1709.10087,", "year": 2017 }, { "authors": [ "Stéphane Ross", "Drew Bagnell" ], "title": "Efficient reductions for imitation learning", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Stephane Ross", "J Andrew Bagnell" ], "title": "Reinforcement and imitation learning via interactive no-regret learning", "venue": "arXiv preprint arXiv:1406.5979,", "year": 2014 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Nicolas Heess", "Jost Tobias Springenberg", "Josh Merel", "Martin Riedmiller", "Raia Hadsell", "Peter Battaglia" ], "title": "Graph networks as learnable physics engines for inference and control", "venue": "arXiv preprint arXiv:1806.01242,", "year": 2018 }, { "authors": [ "Fumihiro Sasaki", "Tetsuya Yohira", "Atsuo Kawaguchi" ], "title": "Sample efficient imitation learning for continuous control", "venue": null, "year": 2018 }, { "authors": [ "Stefan Schaal" ], "title": "Learning from demonstration", "venue": "In Advances in neural information processing systems,", "year": 1997 }, { "authors": [ "Yannick Schroecker", "Charles L Isbell" ], "title": "State aware imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yannick Schroecker", "Mel Vecerik", "Jon Scholz" ], "title": "Generative predecessor models for sample-efficient imitation learning", "venue": null, "year": 2018 }, { "authors": [ "Wen Sun", "Arun Venkatraman", "Geoffrey J Gordon", "Byron Boots", "J Andrew Bagnell" ], "title": "Deeply aggrevated: Differentiable imitation learning for sequential prediction", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Wen Sun", "J Andrew Bagnell", "Byron Boots" ], "title": "Truncated horizon policy search: Combining reinforcement learning & imitation learning", "venue": "arXiv preprint arXiv:1805.11240,", "year": 2018 }, { "authors": [ "Wen Sun", "Geoffrey J Gordon", "Byron Boots", "J Bagnell" ], "title": "Dual policy iteration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 },
{ "authors": [ "Faraz Torabi", "Garrett Warnell", "Peter Stone" ], "title": "Behavioral cloning from observation", "venue": "arXiv preprint arXiv:1805.01954,", "year": 2018 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double q-learning", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Matej Večerík", "Todd Hester", "Jonathan Scholz", "Fumin Wang", "Olivier Pietquin", "Bilal Piot", "Nicolas Heess", "Thomas Rothörl", "Thomas Lampe", "Martin Riedmiller" ], "title": "Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards", "venue": "arXiv preprint arXiv:1707.08817,", "year": 2017 }, { "authors": [ "Ziyu Wang", "Victor Bapst", "Nicolas Heess", "Volodymyr Mnih", "Remi Munos", "Koray Kavukcuoglu", "Nando de Freitas" ], "title": "Sample efficient actor-critic with experience replay", "venue": "arXiv preprint arXiv:1611.01224,", "year": 2016 }, { "authors": [ "Ziyu Wang", "Josh S Merel", "Scott E Reed", "Nando de Freitas", "Gregory Wayne", "Nicolas Heess" ], "title": "Robust imitation of diverse behaviors", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Gu Ye", "Ron Alterovitz" ], "title": "Demonstration-guided motion planning", "venue": "In Robotics research,", "year": 2017 }, { "authors": [ "Brian D Ziebart", "Andrew L Maas", "J Andrew Bagnell", "Anind K Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In AAAI,", "year": 2008 } ]
[ { "heading": null, "text": "Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently. However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned policy. We introduce a notion of conservativelyextrapolated value functions, which provably lead to policies with self-correction. We design an algorithm Value Iteration with Negative Sampling (VINS) that practically learns such value functions with conservative extrapolation. We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks. We also propose the algorithm of using VINS to initialize a reinforcement learning algorithm, which is shown to outperform prior works in sample efficiency." }, { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) algorithms, especially with sparse rewards, often require a large amount of trial-and-errors. Imitation learning from a small number of demonstrations followed by RL finetuning is a promising paradigm to improve the sample efficiency (Rajeswaran et al., 2017; Večerík et al., 2017; Hester et al., 2018; Nair et al., 2018; Gao et al., 2018).\nThe key technical challenge of learning from demonstrations is the covariate shift: the distribution of the states visited by the demonstrations often has a low-dimensional support; however, knowledge learned from this distribution may not necessarily transfer to other distributions of interests. This phenomenon applies to both learning the policy and the value function. The policy learned from behavioral cloning has compounding errors after we execute the policy for multiple steps and reach unseen states (Bagnell, 2015; Ross & Bagnell, 2010). The value function learned from the demonstrations can also extrapolate falsely to unseen states. See Figure 1a for an illustration of the false extrapolation in a toy environment.\nWe develop an algorithm that learns a value function that extrapolates to unseen states more conservatively, as an approach to attack the optimistic extrapolation problem (Fujimoto et al., 2018a). Consider a state s in the demonstration and its nearby state s̃ that is not in the demonstration. The key intuition is that s̃ should have a lower value than s, because otherwise s̃ likely should have been visited by the demonstrations in the first place. If a value function has this property for most of the pair (s, s̃) of this type, the corresponding policy will tend to correct its errors by driving back to the demonstration states because the demonstration states have locally higher values. We formalize the intuition in Section 4 by defining the so-called conservatively-extrapolated value function, which is guaranteed to induce a policy that stays close to the demonstrations states (Theorem 4.4).\nIn Section 5, we design a practical algorithm for learning the conservatively-extrapolated value function by a negative sampling technique inspired by work on learning embeddings Mikolov et al. (2013); Gutmann & Hyvärinen (2012). We also learn a dynamical model by standard supervised learning so that we compute actions by maximizing the values of the predicted next states. 
When additional environment interactions are available, we use the learned value function and the dynamical model to initialize an RL algorithm. This approach addresses the inefficiency in prior work (Hester et al., 2018; Nair et al., 2018; Rajeswaran et al., 2017) whereby randomly-initialized Q functions require a significant amount of time and samples to be warmed up, even though the initial policy already has a non-trivial success rate. Empirically, the proposed algorithm outperforms the prior work in the number of environment interactions needed to achieve a near-optimal success rate.

In summary, our main contributions are: 1) we formalize the notion of value functions with conservative extrapolation, which are proved to induce policies that stay close to demonstration states and achieve near-optimal performances, 2) we propose the algorithm Value Iteration with Negative Sampling (VINS) that outperforms behavioral cloning on three simulated robotics benchmark tasks with sparse rewards, and 3) we show that initializing an RL algorithm from VINS outperforms prior work in sample efficiency on the same set of benchmark tasks." }, { "heading": "2 RELATED WORK", "text": "Imitation learning. Imitation learning is commonly adopted as a standard approach in robotics (Pomerleau, 1989; Schaal, 1997; Argall et al., 2009; Osa et al., 2017; Ye & Alterovitz, 2017; Aleotti & Caselli, 2006; Lawitzky et al., 2012; Torabi et al., 2018; Le et al., 2017; 2018) and many other areas such as playing games (Mnih et al., 2013). Behavioral cloning (Bain & Sammut, 1999) is one of the underlying central approaches. See Osa et al. (2018) for a thorough survey and more references therein. If we are allowed to access an expert policy (instead of trajectories) or an approximate value function, at training time or in the phase of collecting demonstrations, then stronger algorithms can be designed, such as DAgger (Ross et al., 2011), AggreVaTe (Ross & Bagnell, 2014), AggreVaTeD (Sun et al., 2017), DART (Laskey et al., 2017), and THOR (Sun et al., 2018a). Our setting is that we have only clean demonstration trajectories and a sparse reward (but we still hope to learn a self-correcting policy).

Ho & Ermon (2016); Wang et al. (2017); Schroecker et al. (2018) successfully combine generative models in the setting where a large amount of environment interaction without rewards is allowed. The sample efficiency of (Ho & Ermon, 2016) has been improved in various ways, including maximum mean discrepancy minimization (Kim & Park, 2018), a Bayesian formulation of GAIL (Jeon et al., 2018), using an off-policy RL algorithm and solving the reward bias problem (Kostrikov et al., 2018), and bypassing the learning of the reward function (Sasaki et al., 2018). By contrast, we would like to minimize the amount of environment interactions needed, but are allowed to access a sparse reward. The work of Schroecker & Isbell (2017) also aims to learn policies that stay close to the demonstration sets, but through a quite different approach of estimating the true MAP estimate of the policy.
Their algorithm also requires environment interactions, whereas one of our main goals is to improve upon behavioral cloning without any environment interactions.

Inverse reinforcement learning (e.g., see (Abbeel & Ng, 2004; Ng et al., 2000; Ziebart et al., 2008; Finn et al., 2016a;b; Fu et al., 2017)) is another important and successful line of ideas for imitation learning. It relates to our approach in the sense that it aims to learn a reward function that the expert is optimizing. In contrast, we construct a model to learn the value function (of the trivial sparse reward R(s, a) = −1), rather than the reward function. Some of these works (e.g., (Finn et al., 2016a;b; Fu et al., 2017)) use techniques that are reminiscent of negative sampling or contrastive learning, although unlike our method, they use "negative samples" that are sampled from the environments.

Leveraging demonstrations for sample-efficient reinforcement learning. Demonstrations have been widely used to improve the efficiency of RL (Kim et al., 2013; Chemali & Lazaric, 2015; Piot et al., 2014; Sasaki et al., 2018), and a common paradigm for continuous state and action spaces is to initialize RL algorithms with a good policy or Q function (Rajeswaran et al., 2017; Nair et al., 2018; Večerík et al., 2017; Hester et al., 2018; Gao et al., 2018). We experimentally compare with the previous state-of-the-art algorithm of Nair et al. (2018) on the same type of tasks. Gao et al. (2018) introduced a soft version of actor-critic to tackle the false extrapolation of Q in its action argument when the action space is discrete. In contrast, we deal with the extrapolation over states in a continuous state and action space.

Model-based reinforcement learning. Even though we learn a dynamical model in our algorithms, we do not use it to generate fictitious samples for planning. Instead, the learned dynamics are only used in combination with the value function to get a Q function. Therefore, we do not consider our algorithm a model-based technique. We refer to (Kurutach et al., 2018; Clavera et al., 2018; Sun et al., 2018b; Chua et al., 2018; Sanchez-Gonzalez et al., 2018; Pascanu et al., 2017; Khansari-Zadeh & Billard, 2011; Luo et al., 2018) and the references therein for recent work on model-based RL.

Off-policy reinforcement learning. There is a large body of prior work in the domain of off-policy RL, including extensions of policy gradient (Gu et al., 2016; Degris et al., 2012; Wang et al., 2016) or Q-learning (Watkins & Dayan, 1992; Haarnoja et al., 2018; Munos et al., 2016). Fujimoto et al. (2018a) propose to solve off-policy reinforcement learning by constraining the action space, and Fujimoto et al. (2018c) use double Q-learning (Van Hasselt et al., 2016) to alleviate the optimistic extrapolation issue. In contrast, our method adjusts the erroneously extrapolated value function by explicitly penalizing the unseen states (which is customized to the particular off-policy demonstration data). For most off-policy methods, convergence is based on the assumption of visiting each state-action pair sufficiently many times. In the learning-from-demonstrations setting, the demonstration states are highly biased or structured; thus off-policy methods may not be able to learn much from the demonstrations." }, { "heading": "3 PROBLEM SETUP AND CHALLENGES", "text": "We consider a setting with a deterministic MDP with continuous state and action spaces, and sparse rewards.
Let S = R^d be the state space and A = R^k be the action space, and let M⋆ : R^d × R^k → R^d be the deterministic dynamics. At test time, a random initial state s0 is generated from some distribution Ds0. We assume Ds0 has a low-dimensional bounded support because initial states typically have special structure. We aim to find a policy π such that executing π from state s0 will lead to a set of goal states G. All the goal states are terminal states, and we run the policy for at most T steps if none of the goal states is reached.

Let τ = (s0, a0, s1, a1, . . .) be the trajectory obtained by executing a deterministic policy π from s0, where at = π(st) and st+1 = M⋆(st, at). The success rate of the policy π is defined as

succ(π) = E [1{∃t ≤ T, st ∈ G}] (3.1)

where the expectation is taken over the randomness of s0. Note that the problem comes with a natural sparse reward: R(s, a) = −1 for every s and a. This encourages reaching the goal in as few steps as possible: the total payoff of a trajectory equals the negative number of steps if the trajectory succeeds, and −T otherwise. (A minimal rollout sketch for this setup is given at the end of this section.)

Let πe be an expert policy¹ from which a set of n demonstrations is sampled. Concretely, n independent initial states {s0^(i)}_{i=1}^n from Ds0 are generated, and the expert executes πe to collect a set of n trajectories {τ^(i)}_{i=1}^n. We only have access to the trajectories but not the expert policy itself. We will design algorithms for two different settings:

Imitation learning without environment interactions: The goal is to learn a policy π from the demonstration trajectories {τ^(i)}_{i=1}^n without having any additional interactions with the environment.

Leveraging demonstrations in reinforcement learning: Here, in addition to the demonstrations, we can also interact with the environment (by sampling s0 ∼ Ds0 and executing a policy) and observe whether the trajectory reaches the goal. Our aim is to minimize the amount of environment interactions by efficiently leveraging the demonstrations.

Let U be the set of states that can be visited by the demonstration policy from a random state s0 with positive probability. Throughout this paper, we consider the situation where the set U is only a small subset or a low-dimensional manifold of the entire state space. This is typical for continuous state space control problems in robotics, because the expert policy may only visit a very special kind of states that are the most efficient for reaching the goal. For example, in the toy example in Figure 1, the set U only contains those entries with black edges.²

To put our theoretical motivation in Section 4 into context, we next summarize a few challenges of imitation learning that are particularly caused by U being only a small subset of the state space.

Cascading errors for behavioral cloning. As pointed out by Bagnell (2015); Ross & Bagnell (2010), the errors of the policy can compound into a long sequence of mistakes and in the worst case cascade quadratically in the number of time steps T. From a statistical point of view, the fundamental issue is that the distribution of the states that a learned policy may encounter is different from the demonstration state distribution. Concretely, the behavioral cloning policy πBC performs well on the states in U but not on those states far away from U. However, small errors of the learned policy can drive the state to leave U, and then the errors compound as we move further and further away from U. As shown in Section 4, our key idea is to design policies that correct themselves to stay close to the set U.
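As a concrete reference for the setup above, here is a minimal sketch of estimating one Monte-Carlo sample of succ(π) in (3.1) under the sparse reward R(s, a) = −1. The env interface (reset/step) is a hypothetical stand-in, not part of the paper's code.

```python
def rollout(env, policy, T):
    # One Monte-Carlo sample of succ(pi) in (3.1) under R(s, a) = -1:
    # the return is -(number of steps) on success, and -T otherwise.
    # `env` is a hypothetical interface: reset() -> s0, step(a) -> (s', reached_goal).
    s, ret = env.reset(), 0.0
    for _ in range(T):
        s, reached_goal = env.step(policy(s))
        ret -= 1.0                     # sparse reward R(s, a) = -1 at every step
        if reached_goal:               # goal states are terminal
            return True, ret
    return False, -float(T)
```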
Degeneracy in learning value or Q functions from only demonstrations. When U is a small subset or a low-dimensional manifold of the state space, off-policy evaluation of V^πe and Q^πe is fundamentally problematic in the following sense. The expert policy πe is not uniquely defined outside U, because any arbitrary extension of πe outside U would not affect the performance of the expert policy (those states outside U will never be visited by πe from s0 ∼ Ds0). As a result, the value functions V^πe and Q^πe are not uniquely defined outside U. In Section 4, we will propose a conservative extrapolation of the value function that encourages the policy to stay close to U. Fitting Q^πe is in fact even more problematic. We refer to Section A for detailed discussions and why our approach can alleviate the problem.

Success and challenges of initializing RL with imitation learning. A successful paradigm for sample-efficient RL is to initialize the RL policy by some coarse imitation learning algorithm such as BC (Rajeswaran et al., 2017; Večerík et al., 2017; Hester et al., 2018; Nair et al., 2018; Gao et al., 2018). However, we suspect that this method can still be improved, because the value function or the Q function is only randomly initialized, so many samples are burned to warm it up. As alluded to before and shown in Section 4, we will propose a way to learn a value function from the demonstrations so that the subsequent RL algorithm can be initialized with a policy, a value function, and a Q function (which is a composition of the value function and the dynamical model) and thus converges faster." }, { "heading": "4 THEORETICAL MOTIVATIONS", "text": "In this section, we formalize our key intuition that the ideal extrapolation of the value function V^πe is that values should decrease as we move further and further from the demonstrations. Recall that we use U to denote the set of states reachable by the expert policy from any initial state s0 drawn with positive probability from Ds0.³ We use ‖·‖ to denote a norm in Euclidean space R^d. Let Π_U(s) be the projection of s ∈ R^d to a set U ⊂ R^d (according to the norm ‖·‖).⁴

1 In this work, we only consider deterministic expert policies.
2 One may imagine that U can be a more diverse set if the demonstrations are more diverse, but an expert will not visit entries on the top or bottom few rows, because they are not on any optimal routes to the goal state.
3 Recall that we assume that Ds0 has a low-dimensional support and thus typically U will also be a low-dimensional subset of the ambient space.
4 Any tiebreaker can be used if there are multiple closest points.

Figure 2: Illustration of the correction effect. A conservatively-extrapolated value function V, as shown in the figure, has lower values further away from U, and therefore the gradients of V point towards U. With such a value function, suppose we are at state s which is ε-close to U. The locally-correctable assumption of the dynamics assumes the existence of acx that will drive us to a state scx that is closer to U than s. Since scx has a relatively higher value compared to other possible future states that are further away from U (e.g., s′ shown in the figure), scx will be preferred by the optimization (4.3). In other words, if an action a leads to a state s with large distance to U, the action won't be picked by (4.3) because it cannot beat acx.

We introduce the notion of value functions with conservative extrapolation, which match V^πe on the demonstration states U and take smaller values outside U. As formally defined in equations (4.1) and (4.2) in Algorithm 1, we extrapolate V^πe in a way that the value at s ∉ U is determined by the value of its nearest neighbor in U (that is, V^πe(Π_U(s))) and its distance to the nearest neighbor (that is, ‖s − Π_U(s)‖). We allow a δV > 0 error because exact fitting inside or outside U would be impossible. A small numerical sketch of this extrapolation rule follows.
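The extrapolation targets (4.1)-(4.2) can be computed directly when the projection Π_U is approximated by a nearest neighbor over the demonstration states. A minimal numpy sketch (the nearest-neighbor stand-in for Π_U is our simplification, not part of the formal definition):

```python
import numpy as np

def conservative_value(s_query, U_states, V_demo, lam):
    # Target value of (4.1)-(4.2): V(Pi_U(s)) - lam * ||s - Pi_U(s)||, where the
    # projection Pi_U is approximated by the nearest demonstration state.
    # U_states: (n, d) array of demonstration states; V_demo: (n,) their values.
    dists = np.linalg.norm(U_states - s_query, axis=1)
    j = dists.argmin()                 # nearest neighbor approximates Pi_U(s_query)
    return V_demo[j] - lam * dists[j]  # reduces to V_demo[j] when s_query is in U
```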
Algorithm 1 Self-correctable policy induced from a value function with conservative extrapolation
Require: conservatively-extrapolated values V satisfying

V(s) = V^πe(s) ± δV, if s ∈ U (4.1)
V(s) = V^πe(Π_U(s)) − λ‖s − Π_U(s)‖ ± δV, if s ∉ U (4.2)

and a locally approximately correct dynamics M and BC policy πBC satisfying Assumption 4.1.

Self-correctable policy π:

π(s) := argmax_{a: ‖a−πBC(s)‖≤ζ} V(M(s, a)) (4.3)

Besides a conservatively-extrapolated value function V, our Algorithm 1 relies on a learned dynamical model M and a behavioral cloning policy πBC. With these, the policy returns the action, among those around the BC action, with the maximum value of the predicted next state. In other words, the policy π attempts to re-adjust the BC policy locally by maximizing the value of the next state.

Towards analyzing Algorithm 1, we will make a few assumptions. We first assume that the BC policy is correct on the set U, and the dynamical model M is locally correct around the set U and the BC actions. Note that these are significantly weaker than assuming that the BC policy is globally correct (which is impossible to ensure) and that the model M is globally correct.

Assumption 4.1 (Local errors in learned dynamics and BC policy). We assume the BC policy πBC makes at most δπ error on U: for all s ∈ U, we have ‖πBC(s) − πe(s)‖ ≤ δπ. We also assume that the learned dynamics M has δM error locally around U and the BC actions, in the sense that for all s that is ε-close to U and any action a that is ζ-close to πBC(s), we have ‖M(s, a) − M⋆(s, a)‖ ≤ δM.

We make another crucial assumption on the stability/correctability of the true dynamics. The following assumption essentially says that if we are at a state near the demonstration set, then there exists an action that can drive us closer to the demonstration set. This assumption rules out dynamics that do not allow corrections even after the policy makes a small error. For example, if a robot unfortunately falls off a cliff, then fundamentally it cannot recover itself; our algorithm cannot deal with such pathological situations.

Assumption 4.2 (Locally-correctable dynamics). For some γ ∈ (0, 1) and ε > 0, Lc > 0, we assume that the dynamics M⋆ is (γ, Lc, ε)-locally-correctable w.r.t. the set U in the sense that for all ε0 ∈ (0, ε] and any tuple (s̄, ā, s̄′) satisfying s̄, s̄′ ∈ U and s̄′ = M⋆(s̄, ā), and any ε0-perturbation s of s̄ (that is, s ∈ Nε0(s̄)), there exists an action acx that is Lcε0-close to ā, such that it makes a correction in the sense that the resulting state s′ is γε0-close to the set U: s′ = M⋆(s, acx) ∈ Nγε0(U). Here Nδ(K) denotes the set of points that are δ-close to K.

Finally, we will assume the BC policy, the value function, and the dynamics are all Lipschitz in their arguments.⁵ We also assume the projection operator to the set U is locally Lipschitz.
These are regularity conditions that provide loose local extrapolation of these functions, and they are satisfied by the parameterized neural networks used to model these functions.

Assumption 4.3 (Lipschitz-ness of policy, value function, and dynamics). We assume that the policy πBC is Lπ-Lipschitz. That is, ‖πBC(s) − πBC(s̃)‖ ≤ Lπ‖s − s̃‖ for all s, s̃. We assume the value function V^πe and the learned value function V are LV-Lipschitz, and the model M⋆ is LM,a-Lipschitz w.r.t. the action and LM,s-Lipschitz w.r.t. the state s. We also assume that the set U has an LΠ-Lipschitz projection locally: for all s, ŝ that are ε-close to U, ‖Π_U(s) − Π_U(ŝ)‖ ≤ LΠ‖s − ŝ‖.

Under these assumptions, we are now ready to state our main theorem. It claims that 1) the induced policy π in Algorithm 1 stays close to the demonstration set and performs similarly to the expert policy πe, and 2) following the induced policy π, we will arrive at a state with a near-optimal value.

Theorem 4.4. Suppose Assumptions 4.1, 4.2, 4.3 hold with sufficiently small ε > 0 and errors δM, δV, δπ > 0 so that they satisfy ζ ≥ Lcε + δπ + Lπε. Let λ be sufficiently large so that λ ≥ (2LV LΠLMζ + 2δV + 2LV δM) / ((1−γ)ε). Then, the policy π from equation (4.3) satisfies the following:

1. Starting from s0 ∈ U and executing policy π for T0 ≤ T steps, the resulting states s1, . . . , sT0 are all ε-close to the demonstration state set U.

2. In addition, suppose the expert policy makes at least ρ improvement every step, in the sense that for every s ∈ U, either V^πe(M⋆(s, πe(s))) ≥ V^πe(s) + ρ or M⋆(s, πe(s)) reaches the goal.⁶ Assume ε and δM, δV, δπ are small enough so that they satisfy ρ ≳ ε + δπ.

Then, the policy π will achieve a state sT within T ≤ 2|V^πe(s0)|/ρ steps which is ε-close to a state s̄T with value at least V^πe(s̄T) ≳ −(ε + δπ).⁷

The proof is deferred to Section B. The first bullet follows by inductively invoking the following lemma, which states that if the current state is ε-close to U, then so is the next state. The proof of the lemma is the most mathematically involved part of the paper and is deferred to Section B. We illustrate the key idea of the proof in Figure 2 and its caption.

Lemma 4.5. In the setting of Theorem 4.4, suppose s is ε-close to the demonstration state set U. Let a = π(s) and s′ = M⋆(s, a). Then, s′ is also ε-close to the set U.

We effectively represent the Q function by V(M(s, a)) in Algorithm 1. We argue in Section A.1 that this helps address the degeneracy issue when there are random goal states (which is the case in our experiments).

Discussion: can we learn conservatively-extrapolated Q-functions? We remark that we do not expect conservatively-extrapolated Q-functions to be helpful. The fundamental idea here is to penalize the values of unseen states so that the policy can self-correct. However, to learn a Q function that induces self-correctable policies, we should encourage unseen actions that can correct the trajectory, instead of penalizing them just because they have not been seen before. Therefore, it is crucial that the penalization is done on the unseen states (or V) but not on the unseen actions (or Q).

5 We note that technically when the reward function is R(s, a) = −1, the value function is not Lipschitz. This can be alleviated by considering a similar reward R(s, a) = −α − β‖a‖² which does not require additional information.
6 ρ is 1 when the reward is always −1 before achieving the goal.
7 Here ≳ hides multiplicative constant factors depending on the Lipschitz parameters LM,a, LM,s, Lπ, LV.
Algorithm 2 Value Iteration on Demonstrations with Negative Sampling (VINS)
1: R ← demonstration trajectories ▷ No environment interaction will be used
2: Initialize value parameters φ̄ = φ and model parameters θ randomly
3: for i = 1, . . . , T do
4:    sample mini-batch B of N transitions (s, a, r, s′) from R
5:    update φ to minimize Ltd(φ; B) + Lns(φ; B)
6:    update θ to minimize loss Lmodel(θ; B)
7:    update target network: φ̄ ← φ̄ + τ(φ − φ̄)
8:
9: function POLICY(s)
10:    Option 1: a = πBC(s); Option 2: a = 0
11:    sample k noises ξ1, . . . , ξk from Uniform[−1, 1]^m ▷ m is the dimension of the action space
12:    i∗ = argmax_i Vφ(Mθ(s, a + αξi)) ▷ α > 0 is a hyper-parameter
13:    return a + αξi∗" }, { "heading": "5 MAIN APPROACH", "text": "Learning value functions with negative sampling from demonstration trajectories. As motivated in Section 4 by Algorithm 1 and Theorem 4.4, we first develop a practical method that can learn a value function with conservative extrapolation, without environment interaction. Let Vφ be a value function parameterized by φ. Using the standard TD learning loss, we can ensure the value function is accurate on the demonstration states U (i.e., that it satisfies equation (4.1)). Let φ̄ be the parameter of the target value function;⁸ the TD learning loss is defined as

Ltd(φ) = E_{(s,a,s′)∼ρ^πe} [(r(s, a) + Vφ̄(s′) − Vφ(s))²]

where r(s, a) is the (sparse) reward, φ̄ is the parameter of the target network, and ρ^πe is the distribution of state-action-state tuples of the demonstrations. The crux of the ideas in this paper is to use a negative sampling technique to enforce the value function to satisfy the conservative extrapolation requirement (4.2). It would be infeasible to enforce condition (4.2) for every s ∉ U. Instead, we draw random "negative samples" s̃ from the neighborhood of U and enforce condition (4.2) on them. This is inspired by the negative sampling approach widely used in NLP for training word embeddings (Mikolov et al., 2013; Gutmann & Hyvärinen, 2012). Concretely, we draw a sample s ∼ ρ^πe, create a random perturbation of s to get a point s̃ ∉ U, and construct the following loss function:⁹

Lns(φ) = E_{s∼ρ^πe, s̃∼perturb(s)} [(Vφ̄(s) − λ‖s − s̃‖ − Vφ(s̃))²]

The rationale of this loss function can be best seen in the situation where U is assumed to be a low-dimensional manifold in a high-dimensional state space. In this case, s̃ will be outside the manifold U with probability 1. Moreover, the random direction s̃ − s is likely to be almost orthogonal to the tangent space of the manifold U, and thus s is a reasonable approximation of the projection of s̃ back to U, and ‖s − s̃‖ is an approximation of ‖Π_U(s̃) − s̃‖. If U is not a manifold but a small subset of the state space, these properties are still likely to hold for a good fraction of the samples s.

We only attempt to enforce condition (4.2) for states near U. This likely suffices because the induced policy is shown to always stay close to U. Empirically, we perturb s by adding Gaussian noise. The loss function to learn Vφ is defined as L(φ) = Ltd(φ) + µLns(φ) for some constant µ > 0. For a mini-batch B of data, we define the corresponding empirical loss by L(φ; B) (and similarly Ltd(φ; B) and Lns(φ; B)). The concrete iterative learning algorithm is described in lines 1-7 of Algorithm 2 (except that line 6 is for learning the dynamical model, described below); a code sketch of the two value losses follows.

8 A target value function is widely used in RL to improve the stability of the training (Lillicrap et al., 2015; Mnih et al., 2015).
9 With slight abuse of notation, we use ρ^πe to denote both the distribution of (s, a, s′) tuples and the distribution of s in the expert trajectories.
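A minimal PyTorch sketch of Ltd and Lns, assuming Vphi and Vtarget are user-supplied value networks mapping a batch of states to values; the isotropic noise_std is a simplification of the per-coordinate N(0, ρΣ) perturbation described in Section D.2.

```python
import torch

def td_loss(Vphi, Vtarget, s, r, s_next):
    # L_td: fit V_phi(s) to the one-step Bellman backup r + V_targ(s').
    target = r + Vtarget(s_next).squeeze(-1).detach()
    return ((target - Vphi(s).squeeze(-1)) ** 2).mean()

def negative_sampling_loss(Vphi, Vtarget, s, lam, noise_std):
    # L_ns: perturb demonstration states with Gaussian noise and push the value of
    # the perturbed state toward V_targ(s) - lam * ||s - s_tilde||.
    s_tilde = s + noise_std * torch.randn_like(s)       # negative sample near U
    dist = torch.norm(s - s_tilde, dim=-1)              # ||s - s_tilde||
    target = Vtarget(s).squeeze(-1).detach() - lam * dist
    return ((target - Vphi(s_tilde).squeeze(-1)) ** 2).mean()
```

In training (line 5 of Algorithm 2), the two losses are simply summed, with weight µ on the negative sampling term.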
Learning the dynamical model. We use standard supervised learning to train the model. We use the ℓ2 norm as the loss for the model parameters θ instead of the more commonly used MSE loss, following the success of Luo et al. (2018): Lmodel(θ) = E_{(s,a,s′)∼ρ^πe} [‖Mθ(s, a) − s′‖₂].

Optimization for policy. We do not maintain an explicit policy but use the policy induced from Vφ and Mθ by optimizing equation (4.3). A natural choice would be to use projected gradient ascent to optimize equation (4.3). It is also possible to use the cross-entropy method of Kalashnikov et al. (2018) to optimize it. However, we found that random shooting suffices because the action space is relatively low-dimensional in our experiments. Moreover, the introduced randomness appears to slightly reduce the overfitting of the model and value function. As shown in lines 10-13 of Algorithm 2, we sample k actions in the feasible set and choose the one with maximum Vφ(Mθ(s, a)); a sketch is given at the end of this section.

Value iteration with environment interaction. As alluded to before, when more environment interactions are allowed, we initialize an RL algorithm with the value function and dynamics learned by VINS. Given that we have V and M in hand, we alternate between fitted value iteration for updating the value function and supervised learning for updating the model (see Algorithm 3 in Section C). We do not use negative sampling here since the RL algorithm already collects bad trajectories automatically. We also do not hallucinate any goals as in HER (Andrychowicz et al., 2017)." },
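A sketch of the random-shooting induced policy (lines 9-13 of Algorithm 2), assuming V and M are batched PyTorch networks, m is the action dimension, and pi_bc=None corresponds to Option 2 (a = 0):

```python
import torch

def vins_policy(s, V, M, m, pi_bc=None, k=100, alpha=0.1):
    # Random shooting: sample k perturbations around the base action and pick
    # the one whose predicted next state has the highest value.
    a = pi_bc(s) if pi_bc is not None else torch.zeros(m)   # Option 1 / Option 2
    xi = 2 * torch.rand(k, m) - 1                 # k noises from Uniform[-1, 1]^m
    candidates = a.unsqueeze(0) + alpha * xi      # a + alpha * xi_i
    s_rep = s.unsqueeze(0).expand(k, -1)          # repeat the state k times
    values = V(M(s_rep, candidates)).squeeze(-1)  # V_phi(M_theta(s, a + alpha*xi_i))
    return candidates[values.argmax()]            # line 13: a + alpha * xi_{i*}
```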
{ "heading": "6 EXPERIMENTS", "text": "Environments. We evaluate our algorithms in three simulated robotics environments¹⁰ designed by Plappert et al. (2018) based on OpenAI Gym (Brockman et al., 2016) and MuJoCo (Todorov et al., 2012): Reach, Pick-And-Place, and Push. A detailed description can be found in Section D.1.

Demonstrations. For each task, we use Hindsight Experience Replay (HER) (Andrychowicz et al., 2017) to train a policy until convergence. The policy rolls out to collect 100/200 successful trajectories as demonstrations, except for the Reach environment where 100 successful trajectories are sufficient for most of the algorithms to achieve an optimal policy. We filtered out unsuccessful trajectories during data collection.

We consider two settings: imitation learning from only demonstration data, and leveraging demonstrations in RL with a limited amount of interactions. We compare our algorithm with behavioral cloning and multiple variants of our algorithm in the first setting. We compare with the previous state of the art by Nair et al. (2018) and GAIL (Ho & Ermon, 2016) in the second setting. We do not compare with Gao et al. (2018) because it cannot be applied to the case with continuous actions.

Behavioral Cloning (Bain & Sommut, 1999). Behavioral Cloning (BC) learns a mapping from a state to an action on demonstration data using supervised learning. We use the MSE loss for predicting the actions.

Nair et al.'18 (Nair et al., 2018). The previous state-of-the-art algorithm from Nair et al. (2018) combines HER (Andrychowicz et al., 2017) with BC and a few techniques: 1) an additional replay buffer filled with demonstrations, 2) an additional behavioral cloning loss for the policy, 3) a Q-filter for non-optimal demonstrations, and 4) resets to states in the demonstrations to deal with long-horizon tasks. We note that resetting to an arbitrary state may not be realistic for real-world applications in robotics. In contrast, our algorithm does not require resetting to a demonstration state.

GAIL (Ho & Ermon, 2016). Generative Adversarial Imitation Learning (GAIL) imitates the expert by matching the state-action distribution with a GAN-like framework.

HER (Andrychowicz et al., 2017). Hindsight Experience Replay (HER) is one of the best techniques for dealing with sparse-reward environments with multiple goals, and it can be combined with any off-policy RL algorithm. The key idea is that HER extends the replay buffer by changing the goals. With reasonably chosen goals, the underlying off-policy RL algorithm receives more signal from the generated experience, making policy optimization more effective.

DAC (Kostrikov et al., 2018). Discriminator-Actor-Critic (DAC) is a sample-efficient imitation learning algorithm built on top of GAIL. It addresses the reward bias problem by adapting the AIRL reward function and introducing an absorbing state. Furthermore, it replaces the underlying RL algorithm in GAIL with TD3 (Fujimoto et al., 2018b) to make it more sample-efficient.

VINS. As described in Section 5, in the setting without environment interaction, we use Algorithm 2; otherwise we use it to initialize an RL algorithm (see Algorithm 3). We use neural networks to parameterize the value function and the dynamics model. The granularity of the HER demonstration policy is very coarse, so we augment the data with additional linear interpolation between consecutive states. We also use only a subset of the states as inputs to the value function and the dynamics model, which apparently helps improve their training and generalization.

10 Available at https://github.com/openai/gym/tree/master/gym/envs/robotics.
Towards understanding the effect of each component of VINS, we perform three ablative experiments to show the importance of negative sampling, searching in the neighborhood of Behavioral Cloned actions (option 1 in line 10 or Algorithm 2), and a good dynamics model. The results are shown in Table 2. We study three settings: (1) VINS without negative sampling (VINS w/o NS), where the loss Lns is removed; (2) VINS without BC (VINS w/o BC), where option 2 in line 10 or Algorithm 2 is used; (3) VINS with oracle model without BC (VINS w/ oracle w/o BC), where we use the true dynamics model to replace line 12 of Algorithm 2. Note that the last setting is only synthetic for ablation study because in the real-world we don’t have access to the true dynamics model. Please see the caption of Table 2 for more interpretations. We use the same set of hyperparameters for the same environment, which may not be optimal: for example, with more expert trajectories, the negative sampling loss Lns, which can be seen as a regularziation, should be assigned a smaller coefficient µ." }, { "heading": "7 CONCLUSION", "text": "We devise a new algorithm, VINS, that can learn self-correctable by learning value function and dynamical model from demonstrations. The key idea is a theoretical formulation of conservativelyextrapolated value functions that provably leads to self-correction. The empirical results show a promising performance of VINS and an algorithm that initializes RL with VINS. It’s a fascinating direction to study other algorithms that may learn conservatively-extrapolated value functions in\n11The standard error in the paper means the standard error of average success rate over 10 (100 for Reach 10) different random seeds by the same algorithm, that is, the standard deviation of 10 numbers over √ 10 (or 10, respectively). 12The curve for Nair et al.’s only starts after a few thousands steps because the code we use https: //github.com/jangirrishabh/Overcoming-exploration-from-demos only evaluates after the first epoch.\nother real-world applications beyond the proof-of-concepts experiments in this paper. For example, the negative sampling by Gaussian perturbation technique in this paper may not make sense for high-dimensional pixel observation. The negative sampling can perhaps be done in the representation space (which might be learned by unsupervised learning algorithms) so that we can capture the geometry of the state space better." }, { "heading": "A DEGENERACY OF LEARNING Q FUNCTIONS FROM DEMONSTRATIONS AND OUR SOLUTIONS", "text": "Fitting Qπe from only demonstration is problematic: there exists a function Q(s, a) that does not depend on a at all, which can still match Qπe on all possible demonstration data. Consider Q(s, a) , Qπe(s, πe(s)). We can verify that for any (s, a) pair in the demonstrations satisfying a = πe(s), it holds that Q(s, a) = Qπe(s, a). However, Q(s, a) cannot be accurate for other choices of a’s because by its definition, it does not use any information from the action a.\nA.1 COPING WITH THE DEGENERACY WITH LEARNED DYNAMICAL MODEL\nCautious reader may realize that the degeneracy problem about learning Q function from demonstrations with deterministic policies may also occur with learning the model M . 
A.1 COPING WITH THE DEGENERACY WITH A LEARNED DYNAMICAL MODEL

Cautious readers may realize that the degeneracy problem of learning a Q function from demonstrations with deterministic policies may also occur when learning the model M. However, we will show that when the problem has the particular structure of reaching a random but given goal state g, learning Q still suffers from the degeneracy but learning the dynamics does not.

The typical way to deal with a random goal state g is to consider a goal-conditioned value function V(s, g), policy π(s, g), and Q function Q(s, a, g).¹³ However, the dynamical model does not have to be conditioned on the goal. Learning the Q function still suffers from the degeneracy problem because Q(s, a, g) := Q^πe(s, πe(s, g), g) matches Q^πe on the demonstrations but does not use the information from a at all. However, learning M does not suffer from such degeneracy because, given a single state s in the demonstrations, there is still a variety of actions a applied at s, since there are multiple possible goals g. (In other words, we cannot construct a pathological M(s, a) = M⋆(s, πe(s)) because the policy also takes g as an input.) As a result, parameterizing Q by Q(s, a, g) = V(M(s, a), g) does not suffer from the degeneracy either." }, { "heading": "B MISSING PROOFS IN SECTION 4", "text": "Proof of Lemma 4.5. Let s̄ = Π_U(s) and ā = πe(s̄). Because s is ε-close to the set U, we have ‖s − s̄‖ ≤ ε. By the (γ, Lc, ε)-locally-correctable assumption on the dynamics, there exists an action acx such that a) ‖acx − ā‖ ≤ Lcε and b) s′cx := M⋆(s, acx) is γε-close to the set U.

Next we show that acx belongs to the constraint set {a : ‖a − πBC(s)‖ ≤ ζ} in equation (4.3). Note that ‖acx − πBC(s)‖ ≤ ‖acx − ā‖ + ‖ā − πBC(s̄)‖ + ‖πBC(s̄) − πBC(s)‖ ≤ Lcε + δπ + Lπε by the triangle inequality, the closeness of acx and ā, the assumption that πBC has δπ error on the demonstration state set U, and the Lipschitzness of πBC. Since ζ is chosen to be bigger than Lcε + δπ + Lπε, we conclude that acx belongs to the constraint set of the optimization in equation (4.3).

This implies that the maximum value of the optimization (4.3) is at least the corresponding value for acx:

V(M(s, a)) ≥ V(M(s, acx)) (B.1)

Note that a belongs to the constraint set by definition and therefore ‖a − acx‖ ≤ 2ζ. By the Lipschitzness of the dynamical model, ‖M⋆(s, a) − M⋆(s, acx)‖ ≤ LM‖a − acx‖ ≤ 2LMζ. Let s′ = M⋆(s, a) and s′cx = M⋆(s, acx); we have ‖s′ − s′cx‖ ≤ 2LMζ. By the Lipschitz projection assumption, we have ‖Π_U(s′) − Π_U(s′cx)‖ ≤ LΠ‖s′ − s′cx‖ ≤ 2LΠLMζ, which in turn implies |V^πe(Π_U(s′)) − V^πe(Π_U(s′cx))| ≤ 2LV LΠLMζ by the Lipschitzness of V^πe. It follows that

V(s′) ≤ V^πe(Π_U(s′)) − λ‖s′ − Π_U(s′)‖ + δV (by assumption (4.1))
     ≤ V^πe(Π_U(s′cx)) + |V^πe(Π_U(s′cx)) − V^πe(Π_U(s′))| − λ‖s′ − Π_U(s′)‖ + δV (by the triangle inequality)
     ≤ V^πe(Π_U(s′cx)) + 2LV LΠLMζ − λ‖s′ − Π_U(s′)‖ + δV (by the equations in the paragraph above)
     ≤ V(s′cx) + λ‖s′cx − Π_U(s′cx)‖ + 2LV LΠLMζ − λ‖s′ − Π_U(s′)‖ + 2δV (by assumption (4.2))

Note that by the Lipschitzness of the value function and the assumption on the error of the dynamical model,

|V(s′) − V(M(s, a))| = |V(M⋆(s, a)) − V(M(s, a))| ≤ LV ‖M⋆(s, a) − M(s, a)‖ ≤ LV δM (B.2)

Similarly,

|V(s′cx) − V(M(s, acx))| = |V(M⋆(s, acx)) − V(M(s, acx))| ≤ LV ‖M⋆(s, acx) − M(s, acx)‖ ≤ LV δM (B.3)

13 This is equivalent to viewing the random goal g as part of an extended state s̆ = (s, g). Here the second part of the extended state is randomly chosen when sampling the initial state, but never changed by the dynamics; thus all of our previous results apply to this situation via this reduction.
Combining the three equations above, we obtain that

λ‖s′cx − Π_U(s′cx)‖ + 2LV LΠLMζ − λ‖s′ − Π_U(s′)‖ + 2δV ≥ V(s′) − V(s′cx)
     ≥ V(M(s, a)) − V(M(s, acx)) − 2LV δM (by equations (B.2) and (B.3))
     ≥ −2LV δM (by equation (B.1))

Let κ = 2LV LΠLMζ + 2δV + 2LV δM and use the assumption that s′cx is γε-close to the set U (which implies that ‖s′cx − Π_U(s′cx)‖ ≤ γε); we obtain

λ‖s′ − Π_U(s′)‖ ≤ λ‖s′cx − Π_U(s′cx)‖ + κ ≤ λγε + κ (B.4)

Since λ ≥ κ/((1−γ)ε), we have that ‖s′ − Π_U(s′)‖ ≤ ε.

Proof of Theorem 4.4. To prove bullet 1, we apply Lemma 4.5 inductively for T steps. To prove bullet 2, we will show that, as long as si is ε-close to U, we can improve the value by at least ρ − O(ε + δπ) in one step. Consider s̄i = Π_U(si). By the triangle inequality, we have ‖π(si) − πe(s̄i)‖ ≤ ‖π(si) − πBC(si)‖ + ‖πBC(si) − πBC(s̄i)‖ + ‖πBC(s̄i) − πe(s̄i)‖. These three terms can be bounded respectively by ζ, Lπ‖si − s̄i‖ ≤ Lπε, and δπ, using the definition of π, the Lipschitzness of πBC, and the error assumption of πBC on the demonstration state set U, respectively. It follows that ‖M⋆(si, π(si)) − M⋆(s̄i, πe(s̄i))‖ ≤ LM,s‖si − s̄i‖ + LM,a‖π(si) − πe(s̄i)‖ ≤ LM,sε + LM,a(ζ + Lπε + δπ). It follows by the Lipschitzness of the projection that

‖s̄i+1 − Π_U(M⋆(s̄i, πe(s̄i)))‖ = ‖Π_U(M⋆(si, π(si))) − Π_U(M⋆(s̄i, πe(s̄i)))‖ (B.5)
     ≤ LΠ ‖M⋆(si, π(si)) − M⋆(s̄i, πe(s̄i))‖ (B.6)
     ≤ LΠ (LM,sε + LM,a(ζ + Lπε + δπ)) (B.7)

This implies that

|V^πe(s̄i+1) − V^πe(Π_U(M⋆(s̄i, πe(s̄i))))| ≤ LV LΠ(LM,sε + LM,a(ζ + Lπε + δπ)) (B.8)

Recall that we assumed either V^πe(M⋆(s̄i, πe(s̄i))) ≥ V^πe(s̄i) + ρ or M⋆(s̄i, πe(s̄i)) reaches the goal. In the former case, it follows that V^πe(s̄i+1) ≥ V^πe(s̄i) + ρ − LV LΠ(LM,sε + LM,a(ζ + Lπε + δπ)) ≥ V^πe(s̄i) + ρ/2. Otherwise, M⋆(s̄i, πe(s̄i)) reaches the goal and si+1 is ε-close to a state s̄i+1 whose value is at least −LV LΠ(LM,sε + LM,a(ζ + Lπε + δπ)) = −O(ε + δπ)." }, { "heading": "C ALGORITHMS VINS+RL", "text": "A pseudo-code of our algorithm VINS+RL can be found in Algorithm 3.

Algorithm 3 Value Iteration with Environment Interactions Initialized by VINS (VINS+RL)
Require: Initialize parameters φ, θ from the result of VINS (Algorithm 2)
1: R ← demonstration trajectories
2: for stage t = 1, . . . do
3:    collect n1 samples using the induced policy π in Algorithm 2 (with Option 2 in line 10) and add them to R
4:    for i = 1, . . . , ninner do
5:       sample mini-batch B of N transitions (s, a, r, s′) from R
6:       update φ to minimize Ltd(φ; B)
7:       update target value network: φ̄ ← φ̄ + τ(φ − φ̄)
8:       update θ to minimize loss Lmodel(θ; B)

D IMPLEMENTATION DETAILS

D.1 SETUP

In the three environments, a 7-DoF robotic arm is manipulated for different goals. In Reach, the task is to reach a randomly sampled target location; in Pick-And-Place, the goal is to grasp a box on a table and bring it to a target location; and in Push, the goal is to push a box to a target location.

The reward function is 0 if the goal is reached and −1 otherwise. Intuitively, an optimal agent should complete the task along a shortest possible path. The episode stops once the agent achieves the goal or the maximum number of steps has been reached; reaching the maximum number of steps is regarded as a failure.

For more details, we refer the readers to Plappert et al. (2018).

D.2 HYPERPARAMETERS

Behavioral Cloning. We use a feed-forward neural network with 3 hidden layers, each containing 256 hidden units, and ReLU activation functions. We train the network until the test success rate plateaus (a sketch of this architecture is given below).
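A minimal PyTorch sketch of the BC architecture just described (3 hidden layers of 256 units with ReLU), trained with the MSE loss on demonstration state-action pairs:

```python
import torch.nn as nn

def make_bc_policy(state_dim, action_dim):
    # Feed-forward network: 3 hidden layers x 256 units, ReLU activations.
    return nn.Sequential(
        nn.Linear(state_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, action_dim),
    )

bc_criterion = nn.MSELoss()  # minimized over (state, expert-action) pairs
```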
Nair et al.'18 (Nair et al., 2018). We use the implementation from https://github.com/jangirrishabh/Overcoming-exploration-from-demos. We do not change the default hyperparameters, except that we use 17 CPUs.

GAIL (Ho & Ermon, 2016). We use the implementation from OpenAI Baselines (Dhariwal et al., 2017). We do not change the default hyperparameters.

HER (Andrychowicz et al., 2017). We also use the code from OpenAI Baselines and keep the default hyperparameters.

Discriminator-Actor-Critic (Kostrikov et al., 2018). We use the official implementation from https://github.com/google-research/google-research/tree/master/dac.

VINS

• Architecture: We use feed-forward neural networks as function approximators for the value and the dynamical model. For Vφ, the network has one hidden layer with 256 hidden units and a layer normalization layer (Lei Ba et al., 2016). The dynamics model is a feed-forward neural network with two hidden layers and ReLU activation functions; each hidden layer has 500 units. The model uses the reduced states and actions to predict the next reduced states.

• Value augmentation: We augment the dataset by linear interpolation between two consecutive states, i.e., a transition (s, a, r, s′) in the demonstrations is augmented to (s + λ(s′ − s), a, λr, s′) for a random real λ ∼ Uniform[0, 1]. To minimize the losses, we use the Adam optimizer (Kingma & Ba, 2014) with learning rate 3 × 10⁻⁴. We remove some less relevant coordinates from the state space to make the dimension smaller. (But we maintain the full state space for BC; BC performs worse with the reduced state space.) Specifically, the states used by our algorithm for the different environments are: a) Reach: the position of the arm; b) Pick-And-Place: the positions of the block, the gripper, and the arm; and c) Push: the positions of the block and the arm.

• State representation and perturbation: Both our model M and value function V are trained on the space of reduced states. The perturb function is also designed separately for each task. For Reach and Push, we perturb the arm only; for Pick-And-Place, we perturb the arm or the gripper. In the implementation, we perturb the state s by adding Gaussian noise from a distribution N(0, ρΣ), where Σ is a diagonal matrix that contains the variances of each coordinate of the states from the demonstration trajectories. Here ρ > 0 is a hyper-parameter to tune. (A code sketch of the augmentation and perturbation is given below.)" },
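A small numpy sketch of the two data schemes from Section D.2: the interpolation augmentation and the per-coordinate Gaussian perturbation used for negative sampling:

```python
import numpy as np

def interpolate_transition(s, a, r, s_next, rng):
    # Value augmentation: (s + lam*(s'-s), a, lam*r, s') with lam ~ Uniform[0, 1].
    lam = rng.uniform()
    return s + lam * (s_next - s), a, lam * r, s_next

def perturb_state(s, demo_states, rho, rng):
    # Negative sampling perturbation: add noise from N(0, rho * Sigma), where Sigma
    # is diagonal with the per-coordinate variances of the demonstration states.
    sigma2 = demo_states.var(axis=0)                 # diagonal of Sigma
    return s + rng.normal(scale=np.sqrt(rho * sigma2), size=s.shape)
```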
{ "heading": "E ADDITIONAL EXPERIMENTS", "text": "We have some additional experiments to evaluate VINS, namely (a) BC with data augmentation and (b) VINS with an oracle model without negative sampling, which are explained below. The results are summarized in Table 3.

BC w/ data augmentation. As VINS augments the data to train the value function, it is interesting to see the performance of BC with data augmentation, to make the comparison fairer. The way we augment the data is similar to what we have done for the value function: given two consecutive state-action pairs (s, a) and (s′, a′), where s′ = M(s, a), we sample t ∼ Uniform[0, 1] and construct a new pair (ts + (1 − t)s′, ta + (1 − t)a′). The new pair is used for behavioral cloning. However, we found that this kind of data augmentation hurts. We hypothesize that the augmented data provide incorrect information, as the action between two consecutive states might be very nonlinear, whereas the value is more nearly linear. We suspect that the biases/errors introduced by the data augmentation are particularly harmful for BC, because it lacks the mechanism of self-correction that VINS has.

VINS w/ oracle w/o negative sampling. Another way to isolate the effect of negative sampling is to compare VINS w/ oracle w/o NS to VINS w/ oracle. The results are shown in Table 3. We can see that VINS w/ oracle achieves a significantly higher success rate than VINS w/ oracle w/o negative sampling, which shows that negative sampling is essential to the success of VINS.

VINS with fewer demonstrations. To evaluate whether VINS works with few demonstrations, we also conduct our experiments with only 40 expert trajectories. The result is shown in Table 4. Although the performance of BC is worse than with 100 expert trajectories, VINS can still correct some actions via the learned value function and model and achieves a higher success rate.

VINS with a stochastic dynamics model. As this paper mainly studies deterministic dynamics models, we would like to note that our theory can be generalized to stochastic dynamics models easily. We also conduct proof-of-concept experiments to show the performance of VINS in this case. The stochastic environment used in the experiments is based on Push, and the modification is that the dynamics adds noise to the received actions, i.e., s′ ∼ M(s, a + ζ) where ζ ∼ N(0, σ²I) and M is the original deterministic dynamics (a wrapper sketch is given below). One intuitive interpretation is that the actuators on the robot might not produce the exact forces. For simplicity we do not train a stochastic dynamics model but learn a deterministic one to approximate it. The result is summarized in Table 4, in which we can see that even with a deterministic learned model, VINS performs better than BC." } ]
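A minimal sketch of the action-noise modification described in the last paragraph, written as a wrapper around a hypothetical deterministic environment:

```python
import numpy as np

class NoisyActionEnv:
    # Executes a + zeta with zeta ~ N(0, sigma^2 I), so that s' ~ M(s, a + zeta),
    # where M is the deterministic dynamics of `base_env` (a hypothetical
    # interface with reset() and step(a)).
    def __init__(self, base_env, sigma, seed=0):
        self.env, self.sigma = base_env, sigma
        self.rng = np.random.default_rng(seed)

    def reset(self):
        return self.env.reset()

    def step(self, a):
        zeta = self.rng.normal(scale=self.sigma, size=np.shape(a))
        return self.env.step(a + zeta)
```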
2020
null
SP:c2dfaba3df490671f8ce20bf69df96d0887aa19d
[ "The authors propose a prediction model for directed acyclic graphs (DAGs) over a fixed set of vertices based on a neural network. The present work follows the previous work on undirected acyclic graphs, where the key constraint is (3), ensuring the acyclic property. The proposed method performed favorably on artificial/real data compared to previous baselines. ", "This work addresses the problem of learning the structure of directed acyclic graphs in the presence of nonlinearities. The proposed approach is an extension of the NOTEARS algorithm which uses a neural network for each node in the graph during structure learning. This adaptation allows for non-linear relationships to be easily modeled. In addition to the proposed adaptation, the authors employ a number of heuristics from the causal discovery literature to improve the efficacy of the search. Empirical results are provided which compare the proposed algorithm to prior art. " ]
We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows the modeling of complex interactions while avoiding the combinatorial nature of the problem. In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods. On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks, while being competitive with existing greedy search methods on important metrics for causal inference.
[ { "affiliations": [], "name": "Sébastien Lachapelle" }, { "affiliations": [], "name": "Philippe Brouillard" }, { "affiliations": [], "name": "Tristan Deleu" }, { "affiliations": [], "name": "Simon Lacoste-Julien" } ]
[ { "authors": [ "J. Alayrac", "P. Bojanowski", "N. Agrawal", "J. Sivic", "I. Laptev", "S. Lacoste-Julien" ], "title": "Learning from narrated instruction", "venue": "videos. TPAMI,", "year": 2018 }, { "authors": [ "A.-L. Barabási" ], "title": "Scale-free networks: a decade and beyond", "venue": "Science,", "year": 2009 }, { "authors": [ "A.-L. Barabási", "R. Albert" ], "title": "Emergence of scaling in random networks", "venue": "Science,", "year": 1999 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "D.P. Bertsekas" ], "title": "Nonlinear Programming", "venue": "Athena Scientific,", "year": 1999 }, { "authors": [ "P. Bühlmann", "J. Peters", "J. Ernest" ], "title": "CAM: Causal additive models, high-dimensional order search and penalized regression", "venue": "Annals of Statistics,", "year": 2014 }, { "authors": [ "D.M. Chickering" ], "title": "Optimal structure identification with greedy search", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "A. Clauset", "C.R. Shalizi", "M.E.J. Newman" ], "title": "Power-law distributions in empirical data", "venue": "SIAM review,", "year": 2009 }, { "authors": [ "J. Cussens" ], "title": "Bayesian network learning with cutting planes", "venue": "In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence,", "year": 2011 }, { "authors": [ "M. Germain", "K. Gregor", "I. Murray", "H. Larochelle" ], "title": "Made: Masked autoencoder for distribution estimation", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "P. Geurts", "D. Ernst", "L. Wehenkel" ], "title": "Extremely randomized trees", "venue": "Machine learning,", "year": 2006 }, { "authors": [ "X. Glorot", "Y. Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "X. Glorot", "A. Bordes", "Y. Bengio" ], "title": "Deep sparse rectifier neural networks", "venue": "In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "I. Goodfellow", "Y. Bengio", "A. Courville" ], "title": "Deep Learning", "venue": null, "year": 2016 }, { "authors": [ "O. Goudet", "D. Kalainathan", "P. Caillou", "D. Lopez-Paz", "I. Guyon", "M. Sebag" ], "title": "Learning Functional Causal Models with Generative Neural Networks. In Explainable and Interpretable Models in Computer Vision and Machine Learning", "venue": null, "year": 2018 }, { "authors": [ "Biwei Huang", "Kun Zhang", "Yizhu Lin", "Bernhard Schölkopf", "Clark Glymour" ], "title": "Generalized score functions for causal discovery", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery", "year": 2018 }, { "authors": [ "T. Jaakkola", "D. Sontag", "A. Globerson", "M. Meila" ], "title": "Learning Bayesian Network Structure using LP Relaxations", "venue": "In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "D. Kalainathan", "O. Goudet", "I. Guyon", "D. Lopez-Paz", "M. 
Sebag" ], "title": "Sam: Structural agnostic model, causal discovery and penalized adversarial learning", "venue": "arXiv preprint arXiv:1803.04929,", "year": 2018 }, { "authors": [ "D. Koller", "N. Friedman" ], "title": "Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning", "venue": null, "year": 2009 }, { "authors": [ "S. Magliacane", "T. van Ommen", "T. Claassen", "S. Bongers", "P. Versteeg", "J.M. Mooij" ], "title": "Domain adaptation by using causal inference to predict invariant conditional distributions", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "J. Pearl" ], "title": "Causality: Models, Reasoning and Inference", "venue": null, "year": 2009 }, { "authors": [ "J. Pearl" ], "title": "The seven tools of causal inference, with reflections on machine learning", "venue": "Commun. ACM,", "year": 2019 }, { "authors": [ "J. Peters", "P. Bühlman" ], "title": "Identifiability of Gaussian structural equation models with equal error variances", "venue": null, "year": 2014 }, { "authors": [ "J. Peters", "P. Bühlmann" ], "title": "Structural intervention distance (sid) for evaluating causal graphs", "venue": "Neural Computation,", "year": 2013 }, { "authors": [ "J. Peters", "P. Bühlmann" ], "title": "Structural intervention distance (SID) for evaluating causal graphs", "venue": "Neural Computation,", "year": 2015 }, { "authors": [ "J. Peters", "D. Janzing", "B. Schölkopf" ], "title": "Causal inference on discrete data using additive noise models", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2011 }, { "authors": [ "J. Peters", "J.M. Mooij", "D. Janzing", "B. Schölkopf" ], "title": "Causal discovery with continuous additive noise models", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "J. Peters", "D. Janzing", "B. Schölkopf" ], "title": "Elements of Causal Inference - Foundations and Learning Algorithms", "venue": null, "year": 2017 }, { "authors": [ "T.A. Poggio", "K. Kawaguchi", "Q. Liao", "B. Miranda", "L. Rosasco", "X. Boix", "J. Hidary", "H. Mhaskar" ], "title": "Theory of deep learning III: explaining the non-overfitting puzzle", "venue": "arXiv preprint arXiv:1801.00173,", "year": 2018 }, { "authors": [ "L. Prechelt" ], "title": "Early stopping - but when", "venue": "In Neural Networks: Tricks of the Trade,", "year": 1997 }, { "authors": [ "J. Ramsey", "M. Glymour", "R. Sanchez-Romero", "C. Glymour" ], "title": "A million variables and more: the fast greedy equivalence search algorithm for learning high-dimensional graphical causal models, with an application to functional magnetic resonance", "venue": "images. I. J. Data Science and Analytics,", "year": 2017 }, { "authors": [ "K. Sachs", "O. Perez", "D. Pe’er", "D.A. Lauffenburger", "G.P. Nolan" ], "title": "Causal protein-signaling networks derived from multiparameter single-cell data", "venue": null, "year": 2005 }, { "authors": [ "S. Shimizu", "P.O. Hoyer", "A. Hyvärinen", "A. Kerminen" ], "title": "A linear non-gaussian acyclic model for causal discovery", "venue": "Journal of Machine Learning Research,", "year": 2006 }, { "authors": [ "P. Spirtes", "C. Glymour", "R. Scheines" ], "title": "Causation, Prediction, and Search", "venue": "MIT press,", "year": 2000 }, { "authors": [ "T. Tieleman", "G. 
Hinton" ], "title": "Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Tim Van den Bulcke" ], "title": "SynTReN: a generator of synthetic gene expression data for design and analysis of structure learning", "venue": null, "year": 2006 }, { "authors": [ "Y. Yu", "J. Chen", "T. Gao", "M. Yu" ], "title": "DAG-GNN: DAG structure learning with graph neural networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "K. Zhang", "A. Hyvärinen" ], "title": "On the identifiability of the post-nonlinear causal model", "venue": "In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence,", "year": 2009 }, { "authors": [ "Kun Zhang", "Zhikun Wang", "Jiji Zhang", "Bernhard Schölkopf" ], "title": "On estimation of functional causal models: General results and application to the post-nonlinear causal model", "venue": "ACM Trans. Intell. Syst. Technol.,", "year": 2015 }, { "authors": [ "X. Zheng", "B. Aragam", "P.K. Ravikumar", "E.P. Xing" ], "title": "Dags with no tears: Continuous optimization for structure learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Bühlmann" ], "title": "DAG performs a pruning step identical to CAM (Bühlmann et al., 2014) in order to remove spurious edges. We use the implementation of Bühlmann et al. (2014) based on the R function gamboost from the mboost package. For each variable Xi, a generalized additive model is fitted against the current parents of Xi and a significance test of covariates is applied. Parents with a p-value higher than", "venue": null, "year": 2014 }, { "authors": [ "CPDAG. See Peters", "Bühlmann" ], "title": "HYPERPARAMETERS All GraN-DAG runs up to this point were performed using the following set of hyperparameters. We used RMSprop as optimizer with learning rate of 10−2 for the first subproblem and 10−4 for all subsequent suproblems", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Structure learning and causal inference have many important applications in different areas of science such as genetics (Koller & Friedman, 2009; Peters et al., 2017), biology (Sachs et al., 2005) and economics (Pearl, 2009). Bayesian networks (BN), which encode conditional independencies using directed acyclic graphs (DAG), are powerful models which are both interpretable and computationally tractable. Causal graphical models (CGM) (Peters et al., 2017) are BNs which support interventional queries like: What will happen if someone external to the system intervenes on variable X? Recent work suggests that causality could partially solve challenges faced by current machine learning systems such as robustness to out-of-distribution samples, adaptability and explainability (Pearl, 2019; Magliacane et al., 2018). However, structure and causal learning are daunting tasks due to both the combinatorial nature of the space of structures (the number of DAGs grows super exponentially with the number of nodes) and the question of structure identifiability (see Section 2.2). Nevertheless, these graphical models known qualities and promises of improvement for machine intelligence renders the quest for structure/causal learning appealing.\nThe typical motivation for learning a causal graphical model is to predict the effect of various interventions. A CGM can be best estimated when given interventional data, but interventions are often costly or impossible to obtained. As an alternative, one can use exclusively observational data and rely on different assumptions which make the graph identifiable from the distribution (see Section 2.2). This is the approach employed in this paper.\nWe propose a score-based method (Koller & Friedman, 2009) for structure learning named GraNDAG which makes use of a recent reformulation of the original combinatorial problem of finding an optimal DAG into a continuous constrained optimization problem. In the original method named NOTEARS (Zheng et al., 2018), the directed graph is encoded as a weighted adjacency matrix which represents coefficients in a linear structural equation model (SEM) (Pearl, 2009) (see Section 2.3) and enforces acyclicity using a constraint which is both efficiently computable and easily differentiable, thus allowing the use of numerical solvers. This continuous approach improved upon popular methods while avoiding the design of greedy algorithms based on heuristics.\nOur first contribution is to extend the framework of Zheng et al. (2018) to deal with nonlinear relationships between variables using neural networks (NN) (Goodfellow et al., 2016). To adapt the acyclicity constraint to our nonlinear model, we use an argument similar to what is used in\n†Canada CIFAR AI Chair Correspondence to: sebastien.lachapelle@umontreal.ca\nZheng et al. (2018) and apply it first at the level of neural network paths and then at the level of graph paths. Although GraN-DAG is general enough to deal with a large variety of parametric families of conditional probability distributions, our experiments focus on the special case of nonlinear Gaussian additive noise models since, under specific assumptions, it provides appealing theoretical guarantees easing the comparison to other graph search procedures (see Section 2.2 & 3.3). 
On both synthetic and real-world tasks, we show GraN-DAG often outperforms other approaches which leverage the continuous paradigm, including DAG-GNN (Yu et al., 2019), a recent nonlinear extension of Zheng et al. (2018) which uses an evidence lower bound as score.\nOur second contribution is to provide a missing empirical comparison to existing methods that support nonlinear relationships but tackle the optimization problem in its discrete form using greedy search procedures, namely CAM (Bühlmann et al., 2014) and GSF (Huang et al., 2018). We show that GraN-DAG is competitive on the wide range of tasks we considered, while using pre- and post-processing steps similar to CAM.\nWe provide an implementation of GraN-DAG here." }, { "heading": "2 BACKGROUND", "text": "Before presenting GraN-DAG, we review concepts relevant to structure and causal learning." }, { "heading": "2.1 CAUSAL GRAPHICAL MODELS", "text": "We suppose the natural phenomenon of interest can be described by a random vector X ∈ Rd entailed by an underlying CGM (PX ,G) where PX is a probability distribution over X and G = (V,E) is a DAG (Peters et al., 2017). Each node j ∈ V corresponds to exactly one variable in the system. Let πGj denote the set of parents of node j in G and let XπGj denote the random vector containing the variables corresponding to the parents of j in G. Throughout the paper, we assume there are no hidden variables. In a CGM, the distribution PX is said to be Markov to G, i.e. we can write the probability density function (pdf) of PX as p(x) = ∏d j=1 pj(xj |xπGj ) where pj(xj |xπGj ) is the conditional pdf of variable Xj given XπGj . A CGM can be thought of as a BN in which directed edges are given a causal meaning, allowing it to answer queries regarding interventional distributions (Koller & Friedman, 2009)." }, { "heading": "2.2 STRUCTURE IDENTIFIABILITY", "text": "In general, it is impossible to recover G given only samples from PX , i.e. without interventional data. It is, however, customary to rely on a set of assumptions to render the structure fully or partially identifiable.\nDefinition 1 Given a set of assumptions A on a CGM M = (PX ,G), its graph G is said to be identifiable from PX if there exists no other CGM M̃ = (P̃X , G̃) satisfying all assumptions in A such that G̃ 6= G and P̃X = PX .\nThere are many examples of graph identifiability results for continuous variables (Peters et al., 2014; Peters & Bühlman, 2014; Shimizu et al., 2006; Zhang & Hyvärinen, 2009) as well as for discrete variables (Peters et al., 2011). These results are obtained by assuming that the conditional densities belong to a specific parametric family. For example, if one assumes that the distribution PX is entailed by a structural equation model of the form\nXj := fj(XπGj ) +Nj with Nj ∼ N (0, σ2j ) ∀j ∈ V (1)\nwhere fj is a nonlinear function satisfying some mild regularity conditions and the noises Nj are mutually independent, then G is identifiable from PX (see Peters et al. (2014) for the complete theorem and its proof). This is a particular instance of additive noise models (ANM). We will make use of this result in our experiments in Section 4.\nOne can consider weaker assumptions such as faithfulness (Peters et al., 2017). This assumption allows one to identify, not G itself, but the Markov equivalence class to which it belongs (Spirtes\net al., 2000). 
A Markov equivalence class is a set of DAGs which encode exactly the same set of conditional independence statements and can be characterized by a graphical object named a completed partially directed acyclic graph (CPDAG) (Koller & Friedman, 2009; Peters et al., 2017). Some algorithms we use as baselines in Section 4 output only a CPDAG." }, { "heading": "2.3 NOTEARS: CONTINUOUS OPTIMIZATION FOR STRUCTURE LEARNING", "text": "Structure learning is the problem of learning $\mathcal{G}$ using a data set of $n$ samples $\{x^{(1)}, \dots, x^{(n)}\}$ from $P_X$. Score-based approaches cast this problem as an optimization problem, i.e. $\hat{\mathcal{G}} = \arg\max_{\mathcal{G} \in \mathrm{DAG}} \mathcal{S}(\mathcal{G})$ where $\mathcal{S}(\mathcal{G})$ is a regularized maximum likelihood under graph $\mathcal{G}$. Since the number of DAGs is super-exponential in the number of nodes, most methods rely on various heuristic greedy search procedures to approximately solve the problem (see Section 5 for a review). We now present the work of Zheng et al. (2018), which proposes to cast this combinatorial optimization problem into a continuous constrained one.
To do so, the authors propose to encode the graph $\mathcal{G}$ on $d$ nodes as a weighted adjacency matrix $U = [u_1 | \dots | u_d] \in \mathbb{R}^{d \times d}$ which represents (possibly negative) coefficients in a linear SEM of the form $X_j := u_j^\top X + N_j\ \forall j$ where $N_j$ is a noise variable. Let $\mathcal{G}_U$ be the directed graph associated with the SEM and let $A_U$ be the (binary) adjacency matrix associated with $\mathcal{G}_U$. One can see that the following equivalence holds:
$$(A_U)_{ij} = 0 \iff U_{ij} = 0 \quad (2)$$
To make sure $\mathcal{G}_U$ is acyclic, the authors propose the following constraint on $U$:
$$\mathrm{Tr}\, e^{U \odot U} - d = 0 \quad (3)$$
where $e^M \triangleq \sum_{k=0}^{\infty} \frac{M^k}{k!}$ is the matrix exponential and $\odot$ is the Hadamard product.
To see why this constraint characterizes acyclicity, first note that $(A_U^k)_{jj}$ is the number of cycles of length $k$ passing through node $j$ in graph $\mathcal{G}_U$. Clearly, for $\mathcal{G}_U$ to be acyclic, we must have $\mathrm{Tr}\, A_U^k = 0$ for $k = 1, 2, \dots, \infty$. By equivalence (2), this is true when $\mathrm{Tr}\,(U \odot U)^k = 0$ for $k = 1, 2, \dots, \infty$. From there, one can simply apply the definition of the matrix exponential to see why constraint (3) characterizes acyclicity (see Zheng et al. (2018) for the full development).
The authors propose to use a regularized negative least squares score (maximum likelihood for a Gaussian noise model). The resulting continuous constrained problem is
$$\max_U \mathcal{S}(U, \mathbf{X}) \triangleq -\frac{1}{2n} \|\mathbf{X} - \mathbf{X}U\|_F^2 - \lambda \|U\|_1 \quad \text{s.t.} \quad \mathrm{Tr}\, e^{U \odot U} - d = 0 \quad (4)$$
where $\mathbf{X} \in \mathbb{R}^{n \times d}$ is the design matrix containing all $n$ samples. The nature of the problem has been drastically changed: we went from a combinatorial to a continuous problem. The difficulties of combinatorial optimization have been replaced by those of non-convex optimization, since the feasible set is non-convex. Nevertheless, a standard numerical solver for constrained optimization such as an augmented Lagrangian method (Bertsekas, 1999) can be applied to get an approximate solution, hence there is no need to design a greedy search procedure. Moreover, this approach is more global than greedy methods in the sense that the whole matrix $U$ is updated at each iteration. Continuous approaches to combinatorial optimization have sometimes demonstrated improved performance over discrete approaches in the literature (see for example Alayrac et al. (2018, §5.2), where they solve the multiple sequence alignment problem with a continuous optimization method)." }, { "heading": "3 GRAN-DAG: GRADIENT-BASED NEURAL DAG LEARNING", "text": "We propose a new nonlinear extension to the framework presented in Section 2.3. 
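Before detailing the extension, it helps to make constraint (3) concrete. The following is a minimal sketch (illustrative only, not the released NOTEARS code), assuming NumPy and SciPy are available:

```python
# Illustrative sketch of the NOTEARS acyclicity constraint of Equation (3):
# h(U) = Tr(e^{U ⊙ U}) - d equals zero if and only if the graph of U is acyclic.
import numpy as np
from scipy.linalg import expm

def h_notears(U: np.ndarray) -> float:
    d = U.shape[0]
    return float(np.trace(expm(U * U)) - d)  # elementwise "*" is the Hadamard product

U_cyclic = np.array([[0.0, 0.8], [0.5, 0.0]])   # 2-cycle between nodes 0 and 1
U_acyclic = np.array([[0.0, 0.8], [0.0, 0.0]])  # single edge 0 -> 1
print(h_notears(U_cyclic))   # > 0: every cycle contributes to the trace
print(h_notears(U_acyclic))  # ~ 0 up to numerical precision
```

GraN-DAG reuses exactly this trace-of-matrix-exponential test, substituting the neural-network-based adjacency matrix $A_\phi$ defined below for $U \odot U$.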
For each variable $X_j$, we learn a fully connected neural network with $L$ hidden layers parametrized by $\phi_{(j)} \triangleq \{W_{(j)}^{(1)}, \dots, W_{(j)}^{(L+1)}\}$ where $W_{(j)}^{(\ell)}$ is the $\ell$th weight matrix of the $j$th NN (biases are omitted for clarity). Each NN takes as input $X_{-j} \in \mathbb{R}^d$, i.e. the vector $X$ with the $j$th component masked to zero, and outputs $\theta_{(j)} \in \mathbb{R}^m$, the $m$-dimensional parameter vector of the desired distribution family for variable $X_j$.1 The fully connected NNs have the following form
$$\theta_{(j)} \triangleq W_{(j)}^{(L+1)} g(\dots g(W_{(j)}^{(2)} g(W_{(j)}^{(1)} X_{-j})) \dots)\ \forall j \quad (5)$$
where $g$ is a nonlinearity applied element-wise. Note that the evaluation of all NNs can be parallelized on GPU. Distribution families need not be the same for each variable. Let $\phi \triangleq \{\phi_{(1)}, \dots, \phi_{(d)}\}$ represent all parameters of all $d$ NNs. Without any constraint on its parameter $\phi_{(j)}$, neural network $j$ models the conditional pdf $p_j(x_j \mid x_{-j}; \phi_{(j)})$. Note that the product $\prod_{j=1}^{d} p_j(x_j \mid x_{-j}; \phi_{(j)})$ does not integrate to one (i.e. it is not a joint pdf), since it does not decompose according to a DAG. We now show how one can constrain $\phi$ to make sure the product of all conditionals outputted by the NNs is a joint pdf. The idea is to define a new weighted adjacency matrix $A_\phi$ similar to the one encountered in Section 2.3, which can be directly used inside the constraint of Equation 3 to enforce acyclicity." }, { "heading": "3.1 NEURAL NETWORK CONNECTIVITY", "text": "Before defining the weighted adjacency matrix $A_\phi$, we need to focus on how one can make some NN outputs unaffected by some inputs. Since we will discuss properties of a single NN, we drop the NN subscript $(j)$ to improve readability.
We will use the term neural network path to refer to a computation path in a NN. For example, in a NN with two hidden layers, the sequence of weights $(W_{h_1 i}^{(1)}, W_{h_2 h_1}^{(2)}, W_{k h_2}^{(3)})$ is a NN path from input $i$ to output $k$. We say that a NN path is inactive if at least one weight along the path is zero. We can loosely interpret the path product $|W_{h_1 i}^{(1)}| |W_{h_2 h_1}^{(2)}| |W_{k h_2}^{(3)}| \ge 0$ as the strength of the NN path, where a path product is equal to zero if and only if the path is inactive. Note that if all NN paths from input $i$ to output $k$ are inactive (i.e. the sum of their path products is zero), then output $k$ does not depend on input $i$ anymore, since the information in input $i$ will never reach output $k$. The sum of all path products from input $i$ to output $k$, for all inputs $i$ and outputs $k$, can be easily computed by taking the following matrix product:
$$C \triangleq |W^{(L+1)}| \dots |W^{(2)}| |W^{(1)}| \in \mathbb{R}_{\ge 0}^{m \times d} \quad (6)$$
where $|W|$ is the element-wise absolute value of $W$. Let us name $C$ the neural network connectivity matrix. It can be verified that $C_{ki}$ is the sum of all NN path products from input $i$ to output $k$. This means it is sufficient to have $C_{ki} = 0$ to render output $k$ independent of input $i$.
Remember that each NN in our model outputs a parameter vector $\theta$ for a conditional distribution and that we want the product of all conditionals to be a valid joint pdf, i.e. we want its corresponding directed graph to be acyclic. With this in mind, we see that it could be useful to make a certain parameter $\theta$ not dependent on certain inputs of the NN. To have $\theta$ independent of variable $X_i$, it is sufficient to have $\sum_{k=1}^{m} C_{ki} = 0$."
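For illustration, Equation (6) and the masking property just stated can be checked in a few lines; the following is a minimal NumPy sketch of ours (the toy weight shapes are an assumption, not from the released implementation):

```python
# Minimal sketch of the connectivity matrix of Equation (6).
import numpy as np

def connectivity(weights):
    """C = |W^(L+1)| ... |W^(2)| |W^(1)|, where `weights` lists the matrices
    W^(1), ..., W^(L+1) of a single NN. C[k, i] sums all NN path products
    from input i to output k, so C[k, i] == 0 guarantees that output k does
    not depend on input i."""
    C = np.abs(weights[0])
    for W in weights[1:]:
        C = np.abs(W) @ C
    return C

# Toy NN with d = 3 inputs, one hidden layer of 4 units and m = 2 outputs.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
W1[:, 0] = 0.0  # make every NN path leaving input 0 inactive
print(connectivity([W1, W2]).sum(axis=0))  # entry 0 is exactly zero
```

Summing the rows of $C$ gives, for every input $i$, the quantity $\sum_k C_{ki}$ that must vanish for $\theta$ to ignore $X_i$.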
}, { "heading": "3.2 A WEIGHTED ADJACENCY MATRIX", "text": "We now define a weighted adjacency matrix Aφ that can be used in constraint of Equation 3.\n(Aφ)ij ,\n{ ∑m k=1 ( C(j) ) ki , if j 6= i\n0, otherwise (7)\nwhere C(j) denotes the connectivity matrix of the NN associated with variable Xj .\nAs the notation suggests,Aφ ∈ Rd×d≥0 depends on all weights of all NNs. Moreover, it can effectively be interpreted as a weighted adjacency matrix similarly to what we presented in Section 2.3, since we have that (Aφ)ij = 0 =⇒ θ(j) does not depend on variable Xi (8) We note Gφ to be the directed graph entailed by parameter φ. We can now write our adapted acyclicity constraint:\nh(φ) , Tr eAφ − d = 0 (9) 1Not all parameter vectors need to have the same dimensionality, but to simplify the notation, we suppose\nmj = m ∀j\nNote that we can compute the gradient of h(φ) w.r.t. φ (except at points of non-differentiability arising from the absolute value function, similar to standard neural networks with ReLU activations (Glorot et al., 2011); these points did not appear problematic in our experiments using SGD)." }, { "heading": "3.3 A DIFFERENTIABLE SCORE AND ITS OPTIMIZATION", "text": "We propose solving the maximum likelihood optimization problem\nmax φ EX∼PX d∑ j=1 log pj(Xj |Xπφj ;φ(j)) s.t. Tr e Aφ − d = 0 (10)\nwhere πφj denotes the set of parents of node j in graph Gφ. Note that ∑d j=1 log pj(Xj |Xπφj ;φ(j)) is a valid log-likelihood function when constraint (9) is satisfied.\nAs suggested in Zheng et al. (2018), we apply an augmented Lagrangian approach to get an approximate solution to program (10). Augmented Lagrangian methods consist of optimizing a sequence of subproblems for which the exact solutions are known to converge to a stationary point of the constrained problem under some regularity conditions (Bertsekas, 1999). In our case, each subproblem is\nmax φ L(φ, λt, µt) , EX∼PX d∑ j=1 log pj(Xj |Xπφj ;φ(j))− λth(φ)− µt 2 h(φ)2 (11)\nwhere λt and µt are the Lagrangian and penalty coefficients of the tth subproblem, respectively. These coefficients are updated after each subproblem is solved. Since GraN-DAG rests on neural networks, we propose to approximately solve each subproblem using a well-known stochastic gradient algorithm popular for NN in part for its implicit regularizing effect (Poggio et al., 2018). See Appendix A.1 for details regarding the optimization procedure.\nIn the current section, we presented GraN-DAG in a general manner without specifying explicitly which distribution family is parameterized by θ(j). In principle, any distribution family could be employed as long as its log-likelihood can be computed and differentiated with respect to its parameter θ. However, it is not always clear whether the exact solution of problem (10) recovers the ground truth graph G. It will depend on both the modelling choice of GraN-DAG and the underlying CGM (PX ,G).\nProposition 1 Let φ∗ and Gφ∗ be the optimal solution to (10) and its corresponding graph, respectively. Let M(A) be the set of CGM (P ′,G′) for which the assumptions in A are satisfied and let C be the set of CGM (P ′,G′) which can be represented by the model (e.g. NN outputting a Gaussian distribution). If the underlying CGM (PX ,G) ∈ C and C = M(A) for a specific set of assumptions A such that G is identifiable from PX , then Gφ∗ = G.\nProof: Let Pφ be the joint distribution entailed by parameter φ. Note that the population loglikelihood EX∼PX log pφ(X) is maximal iff Pφ = PX . 
We know this maximum can be achieved by a specific parameter φ∗ since by hypothesis (PX ,G) ∈ C. Since G is identifiable from PX , we know there exists no other CGM (P̃X , G̃) ∈ C such that G̃ 6= G and P̃X = PX . Hence Gφ∗ has to be equal to G. In Section 4.1, we empirically explore the identifiable setting of nonlinear Gaussian ANMs introduced in Section 2.2. In practice, one should keep in mind that solving (10) exactly is hard since the problem is non-convex (the augmented Lagrangian converges only to a stationary point) and moreover we only have access to the empirical log-likelihood (Proposition 1 holds for the population case)." }, { "heading": "3.4 THRESHOLDING", "text": "The solution outputted by the augmented Lagrangian will satisfy the constraint only up to numerical precision, thus several entries of Aφ might not be exactly zero and require thresholding. To do so, we mask the inputs of each NN j using a binary matrix M(j) ∈ {0, 1}d×d initialized to have (M(j))ii = 1 ∀i 6= j and zeros everywhere else. Having (M(j))ii = 0 means the input i of NN\nj has been thresholded. This mask is integrated in the product of Equation 6 by doing C(j) , |W (L+1)(j) | . . . |W (1) (j) |M(j) without changing the interpretation of C(j) (M(j) can be seen simply as an extra layer in the NN). During optimization, if the entry (Aφ)ij is smaller than the threshold = 10−4, the corresponding mask entry (M(j))ii is set to zero, permanently. The masks M(j) are never updated via gradient descent. We also add an iterative thresholding step at the end to ensure the estimated graph Gφ is acyclic (described in Appendix A.2)." }, { "heading": "3.5 OVERFITTING", "text": "In practice, we maximize an empirical estimate of the objective of problem (10). It is well known that this maximum likelihood score is prone to overfitting in the sense that adding edges can never reduce the maximal likelihood (Koller & Friedman, 2009). GraN-DAG gets around this issue in four ways. First, as we optimize a subproblem, we evaluate its objective on a held-out data set and declare convergence once it has stopped improving. This approach is known as early stopping (Prechelt, 1997). Second, to optimize (11), we use a stochastic gradient algorithm variant which is now known to have an implicit regularizing effect (Poggio et al., 2018). Third, once we have thresholded our graph estimate to be a DAG, we apply a final pruning step identical to what is done in CAM (Bühlmann et al., 2014) to remove spurious edges. This step performs a regression of each node against its parents and uses a significance test to decide which parents should be kept or not. Fourth, for graphs of 50 nodes or more, we apply a preliminary neighbors selection (PNS) before running the optimization procedure as was also recommended in Bühlmann et al. (2014). This procedure selects a set of potential parents for each variables. See Appendix A.3 for details on PNS and pruning. Many score-based approaches control overfitting by penalizing the number of edges in their score. For example, NOTEARS includes the L1 norm of its weighted adjacency matrix U in its objective. GraN-DAG regularizes using PNS and pruning for ease of comparision to CAM, the most competitive approach in our experiments. The importance of PNS and pruning and their ability to reduce overfitting is illustrated in an ablation study presented in Appendix A.3. 
The study shows that PNS and pruning are both very important for the performance of GraN-DAG in terms of SHD, but do not have a significant effect in terms of SID. In these experiments, we also present NOTEARS and DAG-GNN with PNS and pruning, without noting a significant improvement." }, { "heading": "3.6 COMPUTATIONAL COMPLEXITY", "text": "To learn a graph, GraN-DAG relies on the proper training of the neural networks on which it is built. We thus propose using a stochastic gradient method, a standard choice for NN training because it scales well with both the sample size and the number of parameters and it implicitly regularizes learning. Similarly to NOTEARS, GraN-DAG requires the evaluation of the matrix exponential of $A_\phi$ at each iteration, costing $O(d^3)$. NOTEARS justifies the use of a batch proximal quasi-Newton algorithm by the low number of such $O(d^3)$ iterations required to converge. Since GraN-DAG uses a stochastic gradient method, one would expect it to require more iterations to converge. However, in practice we observe that GraN-DAG performs fewer iterations than NOTEARS before the augmented Lagrangian converges (see Table 4 of Appendix A.1). We hypothesize this is due to early stopping, which avoids having to wait until the full convergence of the subproblems, hence limiting the total number of iterations. Moreover, for the graph sizes considered in this paper ($d \le 100$), the evaluation of $h(\phi)$ in GraN-DAG, which includes the matrix exponentiation, does not dominate the cost of each iteration ($\approx 4\%$ for 20-node and $\approx 13\%$ for 100-node graphs). Evaluating the approximate gradient of the log-likelihood (costing $O(d^2)$ assuming a fixed minibatch size, NN depth and width) appears to be of greater importance for $d \le 100$." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we compare GraN-DAG to various baselines in the continuous paradigm, namely DAG-GNN (Yu et al., 2019) and NOTEARS (Zheng et al., 2018), and also in the combinatorial paradigm, namely CAM (Bühlmann et al., 2014), GSF (Huang et al., 2018), GES (Chickering, 2003) and PC (Spirtes et al., 2000). These methods are discussed in Section 5. In all experiments, each NN learned by GraN-DAG outputs the mean of a Gaussian distribution $\hat{\mu}_{(j)}$, i.e. $\theta_{(j)} := \hat{\mu}_{(j)}$ and $X_j \mid X_{\pi_j^{\mathcal{G}}} \sim \mathcal{N}(\hat{\mu}_{(j)}, \hat{\sigma}^2_{(j)})\ \forall j$. The parameters $\hat{\sigma}^2_{(j)}$ are learned as well, but do not depend on the parent variables $X_{\pi_j^{\mathcal{G}}}$ (unless otherwise stated). Note that this modelling choice matches the nonlinear Gaussian ANM introduced in Section 2.2.
We report the performance of random graphs sampled using the Erdős-Rényi (ER) scheme described in Appendix A.5 (denoted by RANDOM). For each approach, we evaluate the estimated graph on two metrics: the structural Hamming distance (SHD) and the structural interventional distance (SID) (Peters & Bühlmann, 2013). The former simply counts the number of missing, falsely detected or reversed edges. The latter is especially well suited for causal inference since it counts the number of couples $(i, j)$ such that the interventional distribution $p(x_j \mid do(X_i = \bar{x}))$ would be miscalculated if we were to use the estimated graph to form the parent adjustment set. Note that GSF, GES and PC output only a CPDAG, hence the need to report a lower and an upper bound on the SID. See Appendix A.7 for more details on SHD and SID. All experiments were run with publicly available code from the authors' websites. See Appendix A.8 for the details of their hyperparameters. 
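As a concrete reference for the first metric, here is a simplified sketch of ours of the SHD computation, restricted to the DAG-versus-DAG case; Appendix A.7 describes the general PDAG version used when comparing against CPDAG outputs:

```python
import numpy as np

def shd(A_pred, A_true):
    """Structural Hamming distance between two DAG adjacency matrices
    (binary, A[i, j] = 1 for an edge i -> j). A reversed edge counts as a
    single mistake, matching the description above."""
    diff = np.abs(A_pred - A_true)
    # A reversed edge flips two entries of `diff`; subtract one of them so
    # that each reversal is counted once instead of twice.
    reversals = (A_pred * A_true.T) == 1
    return int(diff.sum() - reversals.sum())

A_true = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])  # edges 0->1, 1->2
A_pred = np.array([[0, 0, 0], [1, 0, 1], [0, 0, 0]])  # 0->1 reversed to 1->0
print(shd(A_pred, A_true))  # 1
```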
In Appendix A.9, we explain how one could use a held-out data set to select the hyperparameters of score-based approaches and report the results of such a procedure on almost every settings discussed in the present section." }, { "heading": "4.1 SYNTHETIC DATA", "text": "We have generated different data set types which vary along four dimensions: data generating process, number of nodes, level of edge sparsity and graph type. We consider two graph sampling schemes: Erdős-Rényi (ER) and scale-free (SF) (see Appendix A.5 for details). For each data set type, we sampled 10 data sets of 1000 examples as follows: First, a ground truth DAG G is randomly sampled following either the ER or the SF scheme. Then, the data is generated according to a specific sampling scheme.\nThe first data generating process we consider is the nonlinear Gaussian ANM (Gauss-ANM) introduced in Section 2.2 in which data is sampled following Xj := fj(XπGj ) + Nj with mutually independent noises Nj ∼ N (0, σ2j ) ∀j where the functions fj are independently sampled from a Gaussian process with a unit bandwidth RBF kernel and with σ2j sampled uniformly. As mentioned in Section 2.2, we know G to be identifiable from the distribution. Proposition 1 indicates that the modelling choice of GraN-DAG together with this synthetic data ensure that solving (10) to optimality would recover the correct graph. Note that NOTEARS and CAM also make the correct Gaussian noise assumption, but do not have enough capacity to represent the fj functions properly.\nWe considered graphs of 10, 20, 50 and 100 nodes. Tables 1 & 2 present results only for 10 and 50 nodes since the conclusions do not change with graphs of 20 or 100 nodes (see Appendix A.6 for these additional experiments). We consider graphs of d and 4d edges (respectively denoted by ER1 and ER4 in the case of ER graphs). We report the performance of the popular GES and PC in Appendix A.6 since they are almost never on par with the best methods presented in this section.\nWe now examine Tables 1 & 2 (the errors bars represent the standard deviation across datasets per task). We can see that, across all settings, GraN-DAG and CAM are the best performing methods, both in terms of SHD and SID, while GSF is not too far behind. The poor performance of NOTEARS can be explained by its inability to model nonlinear functions. In terms of SHD, DAG-GNN performs rarely better than NOTEARS while in terms of SID, it performs similarly to RANDOM in almost all cases except in scale-free networks of 50 nodes or more. Its poor performance might be due to its incorrect modelling assumptions and because its architecture uses a strong form of parameter sharing between the fj functions, which is not justified in a setup like ours. GSF performs always better than DAG-GNN and NOTEARS but performs as good as CAM and GraN-DAG only about half the time. Among the continuous approaches considered, GraN-DAG is the best performing on these synthetic tasks.\nEven though CAM (wrongly) assumes that the functions fj are additive, i.e. fj(xπGj ) = ∑ i∈πGj\nfij(xj) ∀j, it manages to compete with GraN-DAG which does not make this incorrect modelling assumption2. This might partly be explained by a bias-variance trade-off. CAM is biased but has a lower variance than GraN-DAG due to its restricted capacity, resulting in both methods performing similarly. 
In Appendix A.4, we present an experiment showing that GraN-DAG can outperform CAM in higher sample size settings, suggesting this explanation is reasonable.\nHaving confirmed that GraN-DAG is competitive on the ideal Gauss-ANM data, we experimented with settings better adjusted to other models to see whether GraN-DAG remains competitive. We considered linear Gaussian data (better adjusted to NOTEARS) and nonlinear Gaussian data with additive functions (better adjusted to CAM) named LIN and ADD-FUNC, respectively. See Appendix A.5 for the details of their generation. We report the results of GraN-DAG and other baselines in Table 12 & 13 of the appendix. On linear Gaussian data, most methods score poorly in terms of SID which is probably due to the unidentifiability of the linear Gaussian model (when the noise variances are unequal). GraN-DAG and CAM perform similarly to NOTEARS in terms of SHD. On ADD-FUNC, CAM dominates all methods on most graph types considered (GraN-DAG is on par only for the 10 nodes ER1 graph). However, GraN-DAG outperforms all other methods which can be explained by the fact that the conditions of Proposition 1 are satisfied (supposing the functions∑ i∈πGj fij(Xi) can be represented by the NNs).\nWe also considered synthetic data sets which do not satisfy the additive Gaussian noise assumption present in GraN-DAG, NOTEARS and CAM. We considered two kinds of post nonlinear causal models (Zhang & Hyvärinen, 2009), PNL-GP and PNL-MULT (see Appendix A.5 for details about their generation). A post nonlinear model has the form Xj := gj(fj(XπGj ) + Nj) where Nj is a noise variable. Note that GraN-DAG (with the current modelling choice) and CAM do not have the representational power to express these conditional distributions, hence violating an assumption of Proposition 1. However, these data sets differ from the previous additive noise setup only by the nonlinearity gj , hence offering a case of mild model misspecification. The results are reported in Table 14 of the appendix. GraN-DAG and CAM are outperforming DAG-GNN and NOTEARS except in SID for certain data sets where all methods score similarly to RANDOM. GraN-DAG and CAM have similar performance on all data sets except one where CAM is better. GSF performs worst than GraN-DAG (in both SHD and SID) on PNL-GP but not on PNL-MULT where it performs better in SID." }, { "heading": "4.2 REAL AND PSEUDO-REAL DATA", "text": "We have tested all methods considered so far on a well known data set which measures the expression level of different proteins and phospholipids in human cells (Sachs et al., 2005). We trained only on the n = 853 observational samples. This dataset and its ground truth graph proposed in Sachs et al. (2005) (11 nodes and 17 edges) are often used in the probabilistic graphical model literature (Koller & Friedman, 2009). We also consider pseudo-real data sets sampled from the SynTReN generator (Van den Bulcke, 2006). This generator was designed to create synthetic transcriptional regulatory networks and produces simulated gene expression data that approximates experimental data. See Appendix A.5 for details of the generation.\n2Although it is true that GraN-DAG does not wrongly assume that the functions fj are additive, it is not clear whether its neural networks can exactly represent functions sampled from the Gaussian process.\nIn applications, it is not clear whether the conditions of Proposition 1 hold since we do not know whether (PX ,G) ∈ C. 
This departure from identifiable settings is an occasion to explore a different modelling choice for GraN-DAG. In addition to the model presented at the beginning of this section, we consider an alternative, denoted GraN-DAG++, which allows the variance parameters σ̂2(i) to depend on the parent variables XπGi through the NN, i.e. θ(i) := (µ̂(i), log σ̂ 2 (i)). Note that this is violating the additive noise assumption (in ANMs, the noise is independent of the parent variables).\nIn addition to metrics used in Section 4.1, we also report SHD-C. To compute the SHD-C between two DAGs, we first map each of them to their corresponding CPDAG and measure the SHD between the two. This metric is useful to compare algorithms which only outputs a CPDAG like GSF, GES and PC to other methods which outputs a DAG. Results are reported in Table 3.\nFirst, all methods perform worse than what was reported for graphs of similar size in Section 4.1, both in terms of SID and SHD. This might be due to the lack of identifiability guarantees we face in applications. On the protein data set, GraN-DAG outperforms CAM in terms of SID (which differs from the general trend of Section 4.1) and arrive almost on par in terms of SHD and SHD-C. On this data set, DAG-GNN has a reasonable performance, beating GraN-DAG in SID, but not in SHD. On SynTReN, GraN-DAG obtains the best SHD but not the best SID. Overall, GraN-DAG is always competitive with the best methods of each task." }, { "heading": "5 RELATED WORK", "text": "Most methods for structure learning from observational data make use of some identifiability results similar to the ones raised in Section 2.2. Roughly speaking, there are two classes of methods: independence-based and score-based methods. GraN-DAG falls into the second class.\nScore-based methods (Koller & Friedman, 2009; Peters et al., 2017) cast the problem of structure learning as an optimization problem over the space of structures (DAGs or CPDAGs). Many popular algorithms tackle the combinatorial nature of the problem by performing a form of greedy search. GES (Chickering, 2003) is a popular example. It usually assumes a linear parametric model with Gaussian noise and greedily search the space of CPDAGs in order to optimize the Bayesian information criterion. GSF (Huang et al., 2018), is based on the same search algorithm as GES, but uses a generalized score function which can model nonlinear relationships. Other greedy approaches rely on parametric assumptions which render G fully identifiable. For example, Peters & Bühlman (2014) relies on a linear Gaussian model with equal variances to render the DAG identifiable. RESIT (Peters et al., 2014), assumes nonlinear relationships with additive Gaussian noise and greedily maximizes an independence-based score. However, RESIT does not scale well to graph of more than 20 nodes. CAM (Bühlmann et al., 2014) decouples the search for the optimal node ordering from the parents selection for each node and assumes an additive noise model (ANM) (Peters et al., 2017) in which the nonlinear functions are additive. As mentioned in Section 2.3, NOTEARS, proposed in Zheng et al. (2018), tackles the problem of finding an optimal DAG as a continuous constrained optimization program. This is a drastic departure from previous combinatorial approaches which enables the application of well studied numerical solvers for continuous optimizations. Recently, Yu et al. 
(2019) proposed DAG-GNN, a graph neural network architecture (GNN) which can be used to learn DAGs via the maximization of an evidence lower bound. By design, a GNN makes use of parameter sharing which we hypothesize is not well suited for most DAG learning tasks. To the best of our knowledge, DAG-GNN is the first approach extending the NOTEARS algorithm for structure\nlearning to support nonlinear relationships. Although Yu et al. (2019) provides empirical comparisons to linear approaches, namely NOTEARS and FGS (a faster extension of GES) (Ramsey et al., 2017), comparisons to greedy approaches supporting nonlinear relationships such as CAM and GSF are missing. Moreover, GraN-DAG significantly outperforms DAG-GNN on our benchmarks. There exists certain score-based approaches which uses integer linear programming (ILP) (Jaakkola et al., 2010; Cussens, 2011) which internally solve continuous linear relaxations. Connections between such methods and the continuous constrained approaches are yet to be explored.\nWhen used with the additive Gaussian noise assumption, the theoretical guarantee of GraN-DAG rests on the identifiability of nonlinear Gaussian ANMs. Analogously to CAM and NOTEARS, this guarantee holds only if the correct identifiability assumptions hold in the data and if the score maximization problem is solved exactly (which is not the case in all three algorithms). DAG-GNN provides no theoretical justification for its approach. NOTEARS and CAM are designed to handle what is sometimes called the high-dimensional setting in which the number of samples is significantly smaller than the number of nodes. Bühlmann et al. (2014) provides consistency results for CAM in this setting. GraN-DAG and DAG-GNN were not designed with this setting in mind and would most likely fail if confronted to it. Solutions for fitting a neural network on less data points than input dimensions are not common in the NN literature.\nMethods for causal discovery using NNs have already been proposed. SAM (Kalainathan et al., 2018) learns conditional NN generators using adversarial losses but does not enforce acyclicity. CGNN (Goudet et al., 2018), when used for multivariate data, requires an initial skeleton to learn the different functional relationships.\nGraN-DAG has strong connections with MADE (Germain et al., 2015), a method used to learn distributions using a masked NN which enforces the so-called autoregressive property. The autoregressive property and acyclicity are in fact equivalent. MADE does not learn the weight masking, it fixes it at the beginning of the procedure. GraN-DAG could be used with a unique NN taking as input all variables and outputting parameters for all conditional distributions. In this case, it would be similar to MADE, except the variable ordering would be learned from data instead of fixed a priori." }, { "heading": "6 CONCLUSION", "text": "The continuous constrained approach to structure learning has the advantage of being more global than other approximate greedy methods (since it updates all edges at each step based on the gradient of the score but also the acyclicity constraint) and allows to replace task-specific greedy algorithms by appropriate off-the-shelf numerical solvers. In this work, we have introduced GraN-DAG, a novel score-based approach for structure learning supporting nonlinear relationships while leveraging a continuous optimization paradigm. The method rests on a novel characterization of acyclicity for NNs based on the work of Zheng et al. (2018). 
We showed GraN-DAG outperforms other gradient-based approaches, namely NOTEARS and its recent nonlinear extension DAG-GNN, on the synthetic data sets considered in Section 4.1 while being competitive on the real and pseudo-real data sets of Section 4.2. Compared to greedy approaches, GraN-DAG is competitive across all datasets considered. To the best of our knowledge, GraN-DAG is the first approach leveraging the continuous paradigm introduced in Zheng et al. (2018) which has been shown to be competitive with state-of-the-art methods supporting nonlinear relationships." }, { "heading": "ACKNOWLEDGMENTS", "text": "This research was partially supported by the Canada CIFAR AI Chair Program and by a Google Focused Research award. The authors would like to thank Rémi Le Priol, Tatjana Chavdarova, Charles Guille-Escuret, Nicolas Gagné and Yoshua Bengio for insightful discussions as well as Alexandre Drouin and Florian Bordes for technical support. The experiments were in part enabled by computational resources provided by Calcul Québec, Compute Canada and Element AI." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 OPTIMIZATION", "text": "Let us recall the augmented Lagrangian:
$$\max_\phi \mathcal{L}(\phi, \lambda_t, \mu_t) \triangleq \mathbb{E}_{X \sim P_X} \sum_{i=1}^{d} \log p_i(X_i \mid X_{\pi_i^\phi}; \phi_{(i)}) - \lambda_t h(\phi) - \frac{\mu_t}{2} h(\phi)^2 \quad (12)$$
where $\lambda_t$ and $\mu_t$ are the Lagrangian and penalty coefficients of the $t$th subproblem, respectively. In all our experiments, we initialize those coefficients using $\lambda_0 = 0$ and $\mu_0 = 10^{-3}$. We approximately solve each non-convex subproblem using RMSprop (Tieleman & Hinton, 2012), a stochastic gradient descent variant popular for NNs. We use the following gradient estimate:
$$\nabla_\phi \mathcal{L}(\phi, \lambda_t, \mu_t) \approx \nabla_\phi \hat{\mathcal{L}}_B(\phi, \lambda_t, \mu_t) \quad \text{with} \quad \hat{\mathcal{L}}_B(\phi, \lambda_t, \mu_t) \triangleq \frac{1}{|B|} \sum_{x \in B} \sum_{i=1}^{d} \log p_i(x_i \mid x_{\pi_i^\phi}; \phi_{(i)}) - \lambda_t h(\phi) - \frac{\mu_t}{2} h(\phi)^2 \quad (13)$$
where $B$ is a minibatch sampled from the data set and $|B|$ is the minibatch size. The gradient estimate $\nabla_\phi \hat{\mathcal{L}}_B(\phi, \lambda_t, \mu_t)$ can be computed using standard deep learning libraries. We consider that a subproblem has converged when $\hat{\mathcal{L}}_H(\phi, \lambda_t, \mu_t)$ evaluated on a held-out data set $H$ stops increasing. Let $\phi_t^*$ be the approximate solution to subproblem $t$. Then, $\lambda_t$ and $\mu_t$ are updated according to the following rule:
$$\lambda_{t+1} \leftarrow \lambda_t + \mu_t h(\phi_t^*), \qquad \mu_{t+1} \leftarrow \begin{cases} \eta \mu_t, & \text{if } h(\phi_t^*) > \gamma h(\phi_{t-1}^*) \\ \mu_t, & \text{otherwise} \end{cases} \quad (14)$$
with $\eta = 10$ and $\gamma = 0.9$. Each subproblem $t$ is initialized using the previous subproblem solution $\phi_{t-1}^*$. The augmented Lagrangian method stops when $h(\phi) \le 10^{-8}$.
Total number of iterations before the augmented Lagrangian converges: In GraN-DAG and NOTEARS, every subproblem is approximately solved using an iterative algorithm. Let $T$ be the number of subproblems solved before the convergence of the augmented Lagrangian. For a given subproblem $t$, let $K_t$ be the number of iterations executed to approximately solve it. Let $I = \sum_{t=1}^{T} K_t$ be the total number of iterations before the augmented Lagrangian converges. Table 4 reports the total number of iterations $I$ for GraN-DAG and NOTEARS, averaged over ten data sets. Note that the matrix exponential is evaluated once per iteration. Even though GraN-DAG uses a stochastic gradient algorithm, it requires fewer iterations than NOTEARS, which uses a batch proximal quasi-Newton method. We hypothesize early stopping avoids having to wait until full convergence before moving to the next subproblem, hence reducing the total number of iterations. 
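For illustration, the coefficient schedule of Equation (14), together with the stopping criterion above, can be sketched as follows (a sketch of ours, not the released implementation; the subproblem solver and the acyclicity measure are passed in as callables since their internals are described elsewhere):

```python
def augmented_lagrangian(solve_subproblem, h, tol=1e-8, eta=10.0, gamma=0.9):
    """Coefficient schedule of Equation (14) with lambda_0 = 0, mu_0 = 1e-3.

    `solve_subproblem(lam, mu)` approximately maximizes L(phi, lam, mu)
    (e.g. RMSprop with early stopping, warm-started at the previous solution)
    and returns phi_t^*; `h(phi)` evaluates the acyclicity measure of
    Equation (9)."""
    lam, mu = 0.0, 1e-3
    h_prev = float("inf")
    while True:
        phi = solve_subproblem(lam, mu)
        h_t = h(phi)
        if h_t <= tol:                # stopping criterion stated above
            return phi
        lam += mu * h_t               # first update rule of Equation (14)
        if h_t > gamma * h_prev:      # constraint shrinking too slowly:
            mu *= eta                 #   raise the penalty coefficient
        h_prev = h_t

# Toy check: pretend every subproblem halves the constraint violation.
state = {"h": 1.0}
def fake_solve(lam, mu):
    state["h"] *= 0.5
    return state["h"]
print(augmented_lagrangian(fake_solve, lambda phi: phi))  # ~7.5e-09
```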
Note that GraN-DAG's total run time is still larger than that of NOTEARS due to its gradient requiring more computation to evaluate (total runtime ≈ 10 minutes against ≈ 1 minute for 20-node graphs and ≈ 4 hours against ≈ 1 hour for 100-node graphs). GraN-DAG's runtime on 100-node graphs can be roughly halved when executed on GPU." }, { "heading": "A.2 THRESHOLDING TO ENSURE ACYCLICITY", "text": "The augmented Lagrangian outputs $\phi_T^*$ where $T$ is the number of subproblems solved before declaring convergence. Note that the weighted adjacency matrix $A_{\phi_T^*}$ will most likely not represent an acyclic graph, even if we threshold as we learn, as explained in Section 3.4. We need to remove additional edges to obtain a DAG (edges are removed using the mask presented in Section 3.4). One option would be to remove edges one by one until a DAG is obtained, starting from the edge $(i, j)$ with the lowest $(A_{\phi_T^*})_{ij}$ up to the edge with the highest $(A_{\phi_T^*})_{ij}$. This amounts to gradually increasing the threshold until $A_{\phi_T^*}$ is acyclic. However, this approach has the following flaw: it is possible to have $(A_{\phi_T^*})_{ij}$ significantly higher than zero while having $\theta_{(j)}$ almost completely independent of variable $X_i$. This can happen for at least two reasons. First, the NN paths from input $i$ to output $k$ might end up cancelling each other, rendering the input $i$ inactive. Second, some neurons of the NNs might always be saturated for the observed range of inputs, rendering some NN paths effectively inactive without being inactive in the sense described in Section 3.1. Those two observations illustrate the fact that having $(A_{\phi_T^*})_{ij} = 0$ is only a sufficient condition to have $\theta_{(j)}$ independent of variable $X_i$ and not a necessary one.
To avoid this issue, we consider the following alternative. Consider the function $\mathcal{L} : \mathbb{R}^d \to \mathbb{R}^d$ which maps all $d$ variables to their respective conditional likelihoods, i.e. $\mathcal{L}_i(X) \triangleq p_i(X_i \mid X_{\pi_i^{\phi_T^*}})\ \forall i$. We consider the following expected Jacobian matrix
$$\mathcal{J} \triangleq \mathbb{E}_{X \sim P_X} \left| \frac{\partial \mathcal{L}}{\partial X} \right|^\top \quad (15)$$
where $\left| \frac{\partial \mathcal{L}}{\partial X} \right|$ is the Jacobian matrix of $\mathcal{L}$ evaluated at $X$, in absolute value (element-wise). Similarly to $(A_{\phi_T^*})_{ij}$, the entry $\mathcal{J}_{ij}$ can be loosely interpreted as the strength of edge $(i, j)$. We propose removing edges starting from the lowest $\mathcal{J}_{ij}$ to the highest, stopping as soon as acyclicity is achieved. We believe $\mathcal{J}$ is better than $A_{\phi_T^*}$ at capturing which NN inputs are effectively inactive since it takes into account NN paths cancelling each other and saturated neurons. Empirically, we found that using $\mathcal{J}$ instead of $A_{\phi_T^*}$ yields better results, and thus we report the results with $\mathcal{J}$ in this paper." }, { "heading": "A.3 PRELIMINARY NEIGHBORHOOD SELECTION AND DAG PRUNING", "text": "PNS: For graphs of 50 nodes or more, GraN-DAG performs a preliminary neighborhood selection (PNS) similar to what has been proposed in Bühlmann et al. (2014). This procedure applies a variable selection method to get a set of possible parents for each node. This is done by fitting extremely randomized trees (Geurts et al., 2006) (using ExtraTreesRegressor from scikit-learn) for each variable against all the other variables. For each node, a feature importance score based on the gain of purity is calculated. Only nodes that have a feature importance score higher than $0.75 \cdot \text{mean}$ are kept as potential parents, where mean is the mean of the feature importance scores of all nodes. 
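For concreteness, this selection rule can be sketched with scikit-learn as follows (an illustrative sketch; the forest size and toy data are our own choices, only the ExtraTreesRegressor choice and the 0.75 · mean rule come from the text):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def preliminary_neighborhood_selection(X, seed=0):
    """For each variable j, keep as potential parents the variables whose
    feature importance exceeds 0.75 * mean(importances), as described above.

    X is an (n, d) data matrix; returns a boolean (d, d) mask with
    mask[i, j] = True meaning i is kept as a potential parent of j."""
    _, d = X.shape
    mask = np.zeros((d, d), dtype=bool)
    for j in range(d):
        others = np.delete(np.arange(d), j)
        forest = ExtraTreesRegressor(n_estimators=100, random_state=seed)
        forest.fit(X[:, others], X[:, j])
        imp = forest.feature_importances_
        mask[others[imp > 0.75 * imp.mean()], j] = True
    return mask

# Toy chain X0 -> X1 -> X2; inspect which candidates survive the rule.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = np.tanh(x0) + 0.1 * rng.normal(size=500)
x2 = x1 ** 2 + 0.1 * rng.normal(size=500)
print(preliminary_neighborhood_selection(np.stack([x0, x1, x2], axis=1)))
```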
Although the use of PNS in CAM was motivated by gains in computation time, GraN-DAG uses it to avoid overfitting, without reducing the computation time.
Pruning: Once the thresholding is performed and a DAG is obtained as described in A.2, GraN-DAG performs a pruning step identical to CAM (Bühlmann et al., 2014) in order to remove spurious edges. We use the implementation of Bühlmann et al. (2014) based on the R function gamboost from the mboost package. For each variable $X_i$, a generalized additive model is fitted against the current parents of $X_i$ and a significance test of covariates is applied. Parents with a p-value higher than 0.001 are removed from the parent set. Similarly to what Bühlmann et al. (2014) observed, this pruning phase generally has the effect of greatly reducing the SHD without considerably changing the SID.
Ablation study: In Table 5, we present an ablation study which shows the effect of adding PNS and pruning to GraN-DAG on different performance metrics and on the negative log-likelihood (NLL) of the training and validation sets. Note that, before computing both NLLs, we reset all parameters of GraN-DAG except the masks and retrained the model on the training set without any acyclicity constraint (acyclicity is already ensured by the masks at this point). This retraining procedure is important since the pruning removes edges (i.e. some additional NN inputs are masked), which affects the likelihood of the model (hence the need to retrain).
A first observation is that adding PNS and pruning improves the NLL on the validation set while deteriorating the NLL on the training set, showing that those two steps are indeed reducing overfitting. Secondly, the effect on SHD is substantial while the effect on SID is almost nonexistent. This can be explained by the fact that SID has more to do with the ordering of the nodes than with false positive edges. For instance, if we have a complete DAG with a node ordering coherent with the ground truth graph, the SID is zero, but the SHD is not, due to all the false positive edges. Without the regularizing effect of PNS and pruning, GraN-DAG manages to find a DAG with a good ordering but with many spurious edges (explaining the poor SHD, the good SID and the big gap between the NLL of the training set and validation set). PNS and pruning help reduce the number of spurious edges, hence improving SHD.
We also implemented PNS and pruning for NOTEARS and DAG-GNN to see whether their performance could also be improved. Table 6 reports an ablation study for DAG-GNN and NOTEARS. First, the SHD improvement is not as important as for GraN-DAG and is almost not statistically significant. The improved SHD does not come close to the performance of GraN-DAG. Second, PNS and pruning do not have a significant effect on SID, as was the case for GraN-DAG. The lack of improvement for those methods is probably due to the fact that they are not overfitting like GraN-DAG, as the training and validation (unregularized) scores show. 
NOTEARS captures only linear relationships, thus it will have a hard time overfitting nonlinear data, and DAG-GNN uses a strong form of parameter sharing between its conditional densities, which possibly causes underfitting in a setup where all the parameters of the conditionals are sampled independently.
Moreover, DAG-GNN and NOTEARS aggressively threshold their respective weighted adjacency matrices at the end of training (with the default parameters used in the code), which also acts as a form of heavy regularization and allows them to remove many spurious edges. GraN-DAG without PNS and pruning does not threshold as strongly by default, which explains the high SHD of Table 5. To test this explanation, we removed all edges $(i, j)$ for which $(A_\phi)_{ij} < 0.33$ for GraN-DAG and obtained an SHD of 29.4±15.9 and an SID of 85.6±45.7, showing a significant improvement over NOTEARS and DAG-GNN, even without PNS and pruning." }, { "heading": "A.4 LARGE SAMPLE SIZE EXPERIMENT", "text": "In this section, we test the bias-variance hypothesis which attempts to explain why CAM is on par with GraN-DAG on Gauss-ANM data even if its model wrongly assumes that the $f_j$ functions are additive. Table 7 reports the performance of GraN-DAG and CAM for different sample sizes. We can see that, as the sample size grows, GraN-DAG ends up outperforming CAM in terms of SID while staying on par in terms of SHD. We explain this observation by the fact that a larger sample size reduces variance for GraN-DAG, thus allowing it to leverage its greater capacity against CAM, which is stuck with its modelling bias. Both algorithms were run with their respective default hyperparameter combinations.
This experiment suggests GraN-DAG could be an appealing option in settings where the sample size is substantial. The present paper focuses on sample sizes typically encountered in the structure/causal learning literature and leaves this question for future work." }, { "heading": "A.5 DETAILS ON DATA SETS GENERATION", "text": "Synthetic data sets: For each data set type, 10 data sets are sampled with 1000 examples each. As with the synthetic data introduced in Section 4.1, for each data set, a ground truth DAG $\mathcal{G}$ is randomly sampled following the ER scheme and then the data is generated. Unless otherwise stated, all root variables are sampled from $U[-1, 1]$.
• Gauss-ANM is generated following $X_j := f_j(X_{\pi_j^{\mathcal{G}}}) + N_j\ \forall j$ with mutually independent noises $N_j \sim \mathcal{N}(0, \sigma_j^2)\ \forall j$, where the functions $f_j$ are independently sampled from a Gaussian process with a unit bandwidth RBF kernel and $\sigma_j^2 \sim U[0.4, 0.8]$. Source nodes are Gaussian with zero mean and variance sampled from $U[1, 2]$.
• LIN is generated following $X_j \mid X_{\pi_j^{\mathcal{G}}} \sim w_j^\top X_{\pi_j^{\mathcal{G}}} + 0.2 \cdot \mathcal{N}(0, \sigma_j^2)\ \forall j$ where $\sigma_j^2 \sim U[1, 2]$ and $w_j$ is a vector of $|\pi_j^{\mathcal{G}}|$ coefficients each sampled from $U[0, 1]$.
• ADD-FUNC is generated following $X_j \mid X_{\pi_j^{\mathcal{G}}} \sim \sum_{i \in \pi_j^{\mathcal{G}}} f_{j,i}(X_i) + 0.2 \cdot \mathcal{N}(0, \sigma_j^2)\ \forall j$ where $\sigma_j^2 \sim U[1, 2]$ and the functions $f_{j,i}$ are independently sampled from a Gaussian process with bandwidth one. This model is adapted from Bühlmann et al. (2014).
• PNL-GP is generated following $X_j \mid X_{\pi_j^{\mathcal{G}}} \sim \sigma(f_j(X_{\pi_j^{\mathcal{G}}}) + \text{Laplace}(0, l_j))\ \forall j$ with the functions $f_j$ independently sampled from a Gaussian process with bandwidth one and $l_j \sim U[0, 1]$. In the two-variable case, this model is identifiable following Corollary 9 from Zhang & Hyvärinen (2009). 
To get identifiability according to this corollary, it is important to use non-Gaussian noise, explaining our design choices.\n• PNL-MULT is generated following Xj |XπGj ∼ exp(log( ∑ i∈πGj\nXi) + |N (0, σ2j )|) ∀j where σ2j ∼ U [0, 1]. Root variables are sampled from U [0, 2]. This model is adapted from Zhang et al. (2015).\nSynTReN: Ten datasets have been generated using the SynTReN generator (http: //bioinformatics.intec.ugent.be/kmarchal/SynTReN/index.html) using the\nsoftware default parameters except for the probability for complex 2-regulator interactions that was set to 1 and the random seeds used were 0 to 9. Each dataset contains 500 samples and comes from a 20 nodes graph.\nGraph types: Erdős-Rényi (ER) graphs are generated by randomly sampling a topological order and by adding directed edges were it is allowed independently with probability p = 2ed2−d were e is the expected number of edges in the resulting DAG. Scale-free (SF) graphs were generated using the Barabási-Albert model (Barabási & Albert, 1999) which is based on preferential attachment. Nodes are added one by one. Between the new node and the existing nodes, m edges (where m is equal to d or 4d) will be added. An existing node i have the probability p(ki) = ki∑\nj kj to be chosen,\nwhere ki represents the degree of the node i. While ER graphs have a degree distribution following a Poisson distribution, SF graphs have a degree distribution following a power law: few nodes, often called hubs, have a high degree. Barabási (2009) have stated that these types of graphs have similar properties to real-world networks which can be found in many different fields, although these claims remain controversial (Clauset et al., 2009)." }, { "heading": "A.6 SUPPLEMENTARY EXPERIMENTS", "text": "Gauss-ANM: The results for 20 and 100 nodes are presented in Table 8 and 9 using the same GaussANM data set types introduced in Section 4.1. The conclusions drawn remains similar to the 10 and 50 nodes experiments. For GES and PC, the SHD and SID are respectively presented in Table 10 and 11. Their performances do not compare favorably to the GraN-DAG nor CAM. Figure 1 shows the entries of the weighted adjacency matrix Aφ as training proceeds in a typical run for 10 nodes.\nLIN & ADD-FUNC: Experiments with LIN and ADD-FUNC data is reported in Table 12 & 13. The details of their generation are given in Appendix A.5.\nPNL-GP & PNL-MULT: Table 14 contains the performance of GraN-DAG and other baselines on post nonlinear data discussed in Section 4.1.\n4Note that GSF results are missing for two data set types in Tables 9 and 14. This is because the search algorithm could not finish within 12 hours, even when the maximal in-degree was limited to 5. All other methods could run in less than 6 hours." }, { "heading": "A.7 METRICS", "text": "SHD takes two partially directed acyclic graphs (PDAG) and counts the number of edge for which the edge type differs in both PDAGs. There are four edge types: i ← j, i → j, i −− j and i j. Since this distance is defined over the space of PDAGs, we can use it to compare DAGs with DAGs, DAGs with CPDAGs and CPDAGs with CPDAGs. When comparing a DAG with a CPDAG, having i← j instead of i −− j counts as a mistake. SHD-C is very similar to SHD. 
The only difference is that both DAGs are first mapped to their respective CPDAGs before measuring the SHD.\nIntroduced by Peters & Bühlmann (2015), SID counts the number of interventional distribution of the form p(xi| do(xj = x̂j)) that would be miscalculated using the parent adjustment formula (Pearl, 2009) if we were to use the predicted DAG instead of the ground truth DAG to form the parent adjustment set. Some care needs to be taken to evaluate the SID for methods outputting a CPDAG such as GES and PC. Peters & Bühlmann (2015) proposes to report the SID of the DAGs which have approximately the minimal and the maximal SID in the Markov equivalence class given by the CPDAG. See Peters & Bühlmann (2015) for more details." }, { "heading": "A.8 HYPERPARAMETERS", "text": "All GraN-DAG runs up to this point were performed using the following set of hyperparameters. We used RMSprop as optimizer with learning rate of 10−2 for the first subproblem and 10−4 for all subsequent suproblems. Each NN has two hidden layers with 10 units (except for the real and pseudo-real data experiments of Table 3 which uses only 1 hidden layer). Leaky-ReLU is used as activation functions. The NN are initialized using the initialization scheme proposed in Glorot & Bengio (2010) also known as Xavier initialization. We used minibatches of 64 samples. This hyperparameter combination have been selected via a small scale experiment in which many hyperparameter combinations have been tried manually on a single data set of type ER1 with 10 nodes until one yielding a satisfactory SHD was obtained. Of course in practice one cannot select hyperparameters in this way since we do not have access to the ground truth DAG. In Appendix A.9, we explain how one could use a held-out data set to select the hyperparameters of score-based approaches and report the results of such a procedure on almost settings presented in this paper.\nFor NOTEARS, DAG-GNN, and GSF, we used the default hyperparameters found in the authors code. It (rarely) happens that NOTEARS and DAG-GNN returns a cyclic graph. In those cases, we removed edges starting from the weaker ones to the strongest (according to their respective weighted adjacency matrices), stopping as soon as acyclicity is achieved (similarly to what was explained in Appendix A.2 for GraN-DAG). For GES and PC, we used default hyperparameters of the pcalg R package. For CAM, we used the the default hyperparameters found in the CAM R package, with default PNS and DAG pruning." }, { "heading": "A.9 HYPERPARAMETER SELECTION VIA HELD-OUT SCORE", "text": "Most structure/causal learning algorithms have hyperparameters which must be selected prior to learning. For instance, NOTEARS and GES have a regularizing term in their score controlling the sparsity level of the resulting graph while CAM has a thresholding level for its pruning phase (also controlling the sparsity of the DAG). GraN-DAG and DAG-GNN have many hyperparameters such as the learning rate and the architecture choice for the neural networks (i.e. number of hidden layers and hidden units per layer). One approach to selecting hyperparameters in practice consists in trying multiple hyperparameter combinations and keeping the one yielding the best score evaluated on a held-out set (Koller & Friedman, 2009, p. 960). By doing so, one can hopefully avoid finding a DAG which is too dense or too sparse since if the estimated graph contains many spurious edges, the score on the held-out data set should be penalized. 
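Concretely, the selection procedure amounts to the following sketch (ours; `fit_and_score` stands in for training one method with a given hyperparameter combination and returning its unregularized held-out score):

```python
import numpy as np

def select_hyperparameters(sample_combination, fit_and_score, n_trials=50, seed=0):
    """Random search as in Bergstra & Bengio (2012): try `n_trials` sampled
    hyperparameter combinations and keep the one with the best held-out score
    (evaluated without any regularizing term)."""
    rng = np.random.default_rng(seed)
    best_combo, best_score = None, -np.inf
    for _ in range(n_trials):
        combo = sample_combination(rng)
        score = fit_and_score(combo)   # e.g. held-out log-likelihood
        if score > best_score:
            best_combo, best_score = combo, score
    return best_combo, best_score

# Toy usage with a quadratic "score" peaked at lr = 0.01 (purely illustrative).
combo, score = select_hyperparameters(
    lambda rng: {"lr": 10 ** rng.uniform(-4, -1)},
    lambda c: -(np.log10(c["lr"]) + 2) ** 2,
)
print(combo, score)
```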
In this section, we experiment with this approach on almost all settings and all methods covered in the present paper.
Experiments: We explored multiple hyperparameter combinations using random search (Bergstra & Bengio, 2012). Table 15 to Table 23 report results for each data set type. Each table reports the SHD and SID averaged over 10 data sets and, for each data set, we tried 50 hyperparameter combinations sampled randomly (see Table 24 for sampling schemes). The hyperparameter combination yielding the best held-out score among all 50 runs is selected per data set (i.e. the averaged SHD and SID scores may correspond to different hyperparameter combinations on different data sets). 80% of the data was used for training and 20% was held out (GraN-DAG uses the same data for early stopping and hyperparameter selection). Note that the held-out score is always evaluated without the regularizing term (e.g. the held-out score of NOTEARS was evaluated without its L1 regularizer).
The symbols ++ and + indicate the hyperparameter search improved performance against default hyperparameter runs above one standard deviation and within one standard deviation, respectively. Analogously for −− and −, which indicate a performance reduction. The flag ∗∗∗ indicates that, on average, fewer than 10 hyperparameter combinations among the 50 tried allowed the method to converge in less than 12 hours. Analogously, ∗∗ indicates between 10 and 25 runs converged and ∗ indicates between 25 and 45 runs converged.
Discussion: GraN-DAG and DAG-GNN are the methods benefiting the most from the hyperparameter selection procedure (although rarely significantly). This might be explained by the fact that neural networks are in general very sensitive to the choice of hyperparameters. However, not all methods improved their performance and no method improves its performance in all settings. GES and GSF, for instance, often have significantly worse results. This might be due to some degree of model misspecification which renders the held-out score a poor proxy for graph quality. Moreover, for some methods the gain from the hyperparameter tuning might be outweighed by the loss due to the 20% reduction in training samples.
Additional implementation details for held-out score evaluation: GraN-DAG makes use of a final pruning step to remove spurious edges. One could simply mask the inputs of the NN corresponding to removed edges and evaluate the held-out score. However, doing so yields an unrepresentative score since some masked inputs have an important role in the learned function and, once these inputs are masked, the quality of the fit might greatly suffer. To avoid this, we retrain the whole model from scratch on the training set with the masking fixed to the one recovered after pruning. Then, we evaluate the held-out score with this retrained architecture. During this retraining phase, the estimated graph is fixed; only the conditional densities are relearned. Since NOTEARS and DAG-GNN are not always guaranteed to return a DAG (although they almost always do), some extra thresholding might be needed as mentioned in Appendix A.8. Similarly to GraN-DAG's pruning phase, this step can seriously reduce the quality of the fit. To avoid this, we also perform a retraining phase for NOTEARS and DAG-GNN. The model of CAM is also retrained after its pruning phase prior to evaluating its held-out score." } ]
2020
GRADIENT-BASED NEURAL DAG LEARNING
SP:4aebddd56e10489765e302e291cf41589d02b530
[ "The paper presents a new NN architecture designed for life-long learning of natural language processing. As well depicted in Figure 2, the proposed network is trained to generate the correct answers and training samples at the same time. This prevents the \"catastrophic forgetting\" of an old task. Compared to the old methods that train a separate generator, the performance of the proposed method is noticeably good as shown in Fig 3. This demonstrates that the new life-long learning approach is effective in avoiding catastrophic forgetting.", "This paper studies the problem of lifelong language learning. The core idea underlying the algorithm includes two parts: 1. Consider the NLP tasks as QA and then train a LM model that generates an answer based on the context and the question; 2. to generate samples representing previous tasks before training on a new task. " ]
Most research on lifelong learning applies to images or games, but not language. We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling. LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples. When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task. The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model. Overall, LAMOL outperforms previous methods by a considerable margin and is only 2–3% worse than multitasking, which is usually considered the LLL upper bound. The source code is available at https://github.com/jojotenya/LAMOL.
[ { "affiliations": [], "name": "Fan-Keng Sun" }, { "affiliations": [], "name": "Cheng-Hao Ho" }, { "affiliations": [], "name": "Hung-Yi Lee" } ]
[ { "authors": [ "Rahaf Aljundi", "Francesca Babiloni", "Mohamed Elhoseiny", "Marcus Rohrbach", "Tinne Tuytelaars" ], "title": "Memory aware synapses: Learning what (not) to forget", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Caiming Xiong Bryan McCann", "Nitish Shirish Keskar", "Richard Socher" ], "title": "The natural language decathlon: Multitask learning as question answering", "venue": "arXiv preprint arXiv:1806.08730,", "year": 2018 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with a-gem", "venue": "arXiv preprint arXiv:1812.00420,", "year": 2018 }, { "authors": [ "Tianqi Chen", "Ian Goodfellow", "Jonathon Shlens" ], "title": "Net2net: Accelerating learning via knowledge transfer", "venue": "arXiv preprint arXiv:1511.05641,", "year": 2015 }, { "authors": [ "Zhiyuan Chen", "Bing Liu" ], "title": "Lifelong Machine Learning", "venue": null, "year": 2016 }, { "authors": [ "Zhiyuan Chen", "Nianzu Ma", "Bing Liu" ], "title": "Lifelong learning for sentiment classification", "venue": "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "year": 2015 }, { "authors": [ "Cyprien de Masson d’Autume", "Sebastian Ruder", "Lingpeng Kong", "Dani Yogatama" ], "title": "Episodic memory in lifelong language learning", "venue": "arXiv preprint arXiv:1906.01076,", "year": 2019 }, { "authors": [ "Mohamed Elhoseiny", "Francesca Babiloni", "Rahaf Aljundi", "Marcus Rohrbach", "Manohar Paluri", "Tinne Tuytelaars" ], "title": "Exploring the challenges towards lifelong fact learning", "venue": "In Asian Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Chrisantha Fernando", "Dylan Banarse", "Charles Blundell", "Yori Zwols", "David Ha", "Andrei A Rusu", "Alexander Pritzel", "Daan Wierstra" ], "title": "Pathnet: Evolution channels gradient descent in super neural networks", "venue": "arXiv preprint arXiv:1701.08734,", "year": 2017 }, { "authors": [ "Jaehong Kim Hanul Shin", "Jung Kwon Lee", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "arXiv preprint arXiv:1705.08690,", "year": 2017 }, { "authors": [ "Luheng He", "Kenton Lee", "Mike Lewis", "Luke Zettlemoyer" ], "title": "Deep semantic role labeling: What works and whats next", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Ronald Kemker", "Christopher Kanan" ], "title": "Fearnet: Brain-inspired model for incremental learning", "venue": "arXiv preprint arXiv:1711.10563,", "year": 2017 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Sang-Woo Lee", "Jin-Hwa Kim", "Jaehyun Jun", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Overcoming catastrophic forgetting by incremental moment matching", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Sungjin Lee" ], "title": "Toward continual learning for conversational agents", "venue": "In arXiv,", "year": 2017 }, { 
"authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Tianlin Liu", "Lyle Ungar", "João Sedoc" ], "title": "Continual learning for sentence representations using conceptors", "venue": "In NAACL-HLT,", "year": 2019 }, { "authors": [ "David Lopez-Paz" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Arun Mallya", "Svetlana Lazebnik" ], "title": "Packnet: Adding multiple tasks to a single network by iterative pruning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "Alec Radford", "Rafal Jozefowicz", "Ilya Sutskever" ], "title": "Learning to generate reviews and discovering sentiment", "venue": "arXiv preprint arXiv:1704.01444,", "year": 2017 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "arXiv preprint arXiv:1606.05250,", "year": 2016 }, { "authors": [ "Jonathan Schwarz", "Jelena Luketina", "Wojciech M Czarnecki", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "arXiv preprint arXiv:1805.06370,", "year": 2018 }, { "authors": [ "Shagun Sodhani", "Sarath Chandar", "Yoshua Bengio" ], "title": "On training recurrent neural networks for lifelong learning", "venue": "arXiv preprint arXiv:1811.07017,", "year": 2018 }, { "authors": [ "Tsung-Hsien Wen", "David Vandyke", "Nikola Mrksic", "Milica Gasic", "Lina M Rojas-Barahona", "Pei-Hao Su", "Stefan Ultes", "Steve Young" ], "title": "A network-based end-to-end trainable task-oriented dialogue system", "venue": "arXiv preprint arXiv:1604.04562,", "year": 2016 }, { "authors": [ "R. Xia", "J. Jiang", "H. He" ], "title": "Distantly supervised lifelong learning for large-scale social media sentiment analysis", "venue": "IEEE Transactions on Affective Computing,", "year": 2017 }, { "authors": [ "Yann LeCun Xiang Zhang", "Junbo Zhao" ], "title": "Character-level convolutional networks for text classification", "venue": "arXiv preprint arXiv:1509.01626,", "year": 2015 }, { "authors": [ "Hu Xu", "Bing Liu", "Lei Shu", "Philip S. Yu" ], "title": "Lifelong domain word embedding via meta-learning", "venue": "In Proceedings of the 27th International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Victor Zhong", "Caiming Xiong", "Richard Socher" ], "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning", "venue": "arXiv preprint arXiv:1709.00103,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The current dominant paradigm for machine learning is to run an algorithm on a given dataset to produce a trained model specifically for a particular purpose; this is isolated learning (Chen & Liu, 2016, p. 150). In isolated learning, the model is unable to retain and accumulate the knowledge it has learned before. When a stream of tasks are joined to be trained sequentially, isolated learning faces catastrophic forgetting (McCloskey & Cohen, 1989) due to a non-stationary data distribution that biases the model (left figure of Figure 1). In contrast, lifelong learning is designed to address a stream of tasks by accumulating interconnected knowledge between learned tasks and retaining the performance of those tasks. A human easily achieves lifelong learning, but this is nontrivial for a machine; thus lifelong learning is a vital step toward artificial general intelligence.\nIn this paper, we focus on lifelong language learning, where a machine achieves lifelong learning on a stream of natural language processing (NLP) tasks. To the best of our knowledge, lifelong language learning has been studied in only a few instances; for sentiment analysis (Chen et al., 2015b; Xia et al., 2017), conversational agents (Lee, 2017), word representation learning (Xu et al., 2018), sentence representation learning (Liu et al., 2019), text classification, and question answering (d’Autume et al., 2019). However, in all previous work, the tasks in the stream are essentially the same task but in different domains. To achieve lifelong language learning on fundamentally different tasks, we propose LAMOL — LAnguage MOdeling for Lifelong language learning.\nIt has been shown that many NLP tasks can be considered question answering (QA) (Bryan McCann & Socher, 2018). Therefore, we address multiple NLP tasks with a single model by training a language model (LM) that generates an answer based on the context and the question. Treating QA as language modeling is beneficial because the LM can be pre-trained on a large number of sentences without any labeling (Radford et al., 2019); however, this does not directly solve the problem of LLL. If we train an LM on a stream of tasks, catastrophic forgetting still occurs. However, as an LM is intrinsically a text generator, we can use it to answer questions while generating pseudo-samples of\n∗Equal contribution. †Work done while at National Taiwan University.\nthe previous task to be replayed later. LAMOL is inspired by the data-based approach for LLL in which a generator learns to generate samples in previous tasks (middle of Figure 1) (Hanul Shin & Kim, 2017; Kemker & Kanan, 2017). In contrast to previous approaches, LAMOL needs no extra generator (right of Figure 1). LAMOL is also similar to multitask training, but the model itself generates data from previous tasks instead of using real data.\nOur main contributions in this paper are:\n• We present LAMOL, a simple yet effective method for LLL. Our method has the advantages of no requirements in terms of extra memory or model capacity. We also do not need to know how many tasks to train in advance and can always train on additional tasks when needed.\n• Experimental results show that our methods outperform baselines and other state-of-the-art methods by a considerable margin and approaches the multitasking upper bound within 2–3%.\n• Furthermore, we propose adding task-specific tokens during pseudo-sample generation to evenly split the generated samples among all previous tasks. 
This extension stabilizes LLL and is particularly useful when training on a large number of tasks.\n• We analyze how different amounts of pseudo-samples affect the final performance of LAMOL, considering results both with and without the task-specific tokens.\n• We open-source our code to facilitate further LLL research." }, { "heading": "2 RELATED WORK", "text": "Lifelong learning research is based on regularization, architecture, or data. Here is a brief survey of works in these three categories." }, { "heading": "2.1 REGULARIZATION-BASED METHODS", "text": "In this approach, a constraint, i.e., a regularization term, is added to minimize deviation from trained weights while updating the weights in a new task. Most regularization-based methods estimate the importance of each parameter and add the importance as a constraint to the loss function. Elastic weight consolidation (EWC) (Kirkpatrick et al., 2017) calculates a Fisher information matrix to estimate the sensitivity of parameters as importance. Online EWC (Schwarz et al., 2018) is a transformed version of EWC. Instead of tracking the importance of parameters for each task, online EWC simply accumulates the importance over the stream of tasks. Synaptic intelligence (SI) (Zenke et al., 2017) assigns importance to each parameter according to its contribution to the change in the total loss. Memory aware synapses (MAS) (Aljundi et al., 2018) estimate importance via the gradients of the model outputs. In contrast to estimating the importance of weights, incremental moment matching (IMM) (Lee et al., 2017) matches the moments of weights between different tasks." }, { "heading": "2.2 ARCHITECTURE-BASED METHODS", "text": "For this category, the main idea is to assign a dedicated capacity inside a model for each task. After completing a task, the weights are frozen and may not be changed thereafter. Some methods allow models to expand, whereas some fix the size but must allocate capacity for tasks at the beginning. Progressive neural networks (Rusu et al., 2016) utilize one column of the neural network per task. Once a new task is trained, progressive neural networks add a new column of the neural network for the task while freezing the previously trained columns. Columns that have been frozen are not allowed to change but are connected to the new column to transfer knowledge from old tasks. Sodhani et al. (2018) unify gradient episodic memory (Lopez-Paz et al., 2017) and Net2Net (Chen et al., 2015a). Using a curriculum-based setting, the model learns the tasks in easy-to-hard order. The model alleviates the forgetting problem with the GEM method, and if it fails to learn the current task and has not been expanded yet, the model expands to a larger model via the Net2Net approach.\nPathNet (Fernando et al., 2017) reuses subsets of a neural network to transfer knowledge between tasks. Unlike progressive neural networks, PathNet does not allow the model to expand. Instead, it builds a huge fixed-size model composed of a neural network and paths between different layers of the neural networks. While training a task, it selects the best combination of neural networks and paths for that particular task. Similar to progressive neural networks, selected parts are fixed to allow only inference and not training.
Inspired by network pruning, PackNet (Mallya & Lazebnik, 2018) prunes and re-trains the network iteratively to pack numerous tasks into a single huge model.\nThis category has some drawbacks. When resources are limited, model expansion is prohibited. Also, some architecture-based methods require the number of tasks in advance to allocate the capacity for the tasks, which greatly reduces their practicality." }, { "heading": "2.3 DATA-BASED METHODS", "text": "This method restricts weights through the data distribution of old tasks. One data-based approach keeps a small amount of real samples from old tasks, and the other distills the knowledge from old data and imagines pseudo-data of old tasks later on. While training a new task, the data or pseudo-data is used to prevent weights from greatly deviating from the previous status.\nGradient episodic memory (GEM) (Lopez-Paz et al., 2017) preserves a subset of real samples from previous tasks. Utilizing these real samples during optimization helps somewhat to constrain parameter gradients. Averaged-GEM (A-GEM) (Chaudhry et al., 2018) is a more efficient version of GEM which achieves the same or even better performance than the original GEM. Learning without forgetting (Li & Hoiem, 2017) minimizes the alteration of shared parameters by recording the outputs from old task modules on data from the new task before updating. Hanul Shin & Kim (2017) and Kemker & Kanan (2017) encode data from old tasks into a generative model system. The latter imitates the dual-memory system of the human brain, in that the model automatically decides which memory should be consolidated. Both methods replay pseudo-data of previous tasks using the generative model during training.\nd’Autume et al. (2019) investigates the performance of the episodic memory system on NLP problems. It distills the knowledge of previous tasks into episodic memory and replays it afterward. This work evaluates the method on two streams of tasks: question answering and text classification." }, { "heading": "3 LAMOL", "text": "A pre-trained LM can generate a coherent sequence of text given a context. Thus, we propose LAMOL, a method of training a single LM that learns not only to answer the question given the context but also to generate the context, the question, and the answer given a generation token. That is, in LAMOL, a model plays the role of both LM and QA model. Hence, answering questions and generating pseudo-old samples can both be done by a single model. During LLL, these pseudo-old samples are trained with new samples from new tasks to help mitigate catastrophic forgetting." }, { "heading": "3.1 DATA FORMATTING", "text": "Inspired by the protocol used by decaNLP (Bryan McCann & Socher, 2018), samples from the datasets we used are framed into a SQuAD-like scheme, which consists of context, question, and answer. Although the LM is simultaneously a QA model, the data format depends on the training objective. When training as a QA model, the LM learns to decode the answer after reading the context and question. On the other hand, when training as an LM, the LM learns to decode all three parts given a generation token.\nIn addition to context, question, and answer, we add three special tokens:\nANS Inserted between question and answer. As the context and question are known during inference, decoding starts after inputting ANS.\nEOS The last token of every example. Decoding stops when EOS is encountered. GEN The first token during pseudo-sample generation. 
Decoding starts after inputting GEN.\nThe data formats for QA and LM training are shown in Figure 2." }, { "heading": "3.2 TRAINING", "text": "Assume a stream of tasks {T1, T2, . . . }, where the number of tasks may be unknown. Directly training the LM on these tasks sequentially results in catastrophic forgetting. Thus, before beginning training on a new task Ti, i > 1, the model first generates pseudo-samples T′i by top-k sampling that represent the data distribution of previous tasks T1, . . . , Ti−1. Then, the LM trains on the mixture of Ti and T′i. To balance the ratio between |Ti| and |T′i|, the LM generates γ|Ti| pseudo-samples, where |Ti| denotes the number of samples in task Ti and γ is the sampling ratio. If a generated sample does not have exactly one ANS in it, the sample is discarded. This happens in only 0.5%-1% of generated samples.\nDuring training, each sample is formatted into both the QA format and the LM format. Then, in the same optimization step, both formats are fed into the LM to minimize the QA loss LQA and LM loss LLM together. Overall, the loss is L = LQA + λLLM, where λ is the weight of the LM loss." }, { "heading": "3.3 TASK-SPECIFIC TOKENS", "text": "Using the same GEN token for all tasks is problematic when training for many tasks because the portion of old tasks decreases exponentially in theory. For instance, if γ = 0.01, then the portion of the first task when training the second task is about 1%, but is only about 0.01% when training the third task. This issue is definitely harmful to LLL. To mitigate this, we can choose to replace the GEN token with a task-specific token for each task to inform the model to generate pseudo-samples belonging to the specific task. Under this setup, all previous tasks have the same share of the γ|Ti| generated pseudo-samples. That is, when beginning training for the i-th task Ti, we generate (γ/(i−1))|Ti| pseudo-samples for each of the previous i−1 tasks. Note that as each task uses a specific token, the vocabulary size and the embedding weight of the LM increase slightly as more tasks are trained." }, { "heading": "4 EXPERIMENT SETUP", "text": "" }, { "heading": "4.1 TASKS, DATASETS, AND METRICS", "text": "We collect five disparate tasks mentioned in decaNLP (Bryan McCann & Socher, 2018): question answering, semantic parsing, sentiment analysis, semantic role labeling, and goal-oriented dialogue, with a dataset for each task.\nFurthermore, to compare our method with d’Autume et al. (2019), we conducted experiments on four text classification tasks: news classification, sentiment analysis, Wikipedia article classification, and question-and-answer categorization, with five datasets. We use the procedure from d’Autume et al. (2019) to produce equal-sized datasets.\nWe do not train on all datasets from both papers due to a lack of computational resources. For each task, there is a corresponding evaluation metric. Table 1 contains a summary of tasks, datasets, and metrics. Additional details are provided in Appendix A. Note that the score of any metric lies between 0 and 100%." }, { "heading": "4.2 METHODS TO BE COMPARED", "text": "All methods use the smallest pre-trained GPT-2 model (Radford et al., 2019)1 as the LM. Each task is trained for nine epochs; greedy decoding is applied during inference.\n• LAMOL In all experiments, we set k = 20 for top-k sampling and λ = 0.25 for the weight of the LM loss. LAMOLγGEN denotes LAMOL with a sampling ratio of γ, and the same GEN token is used for all tasks.
If the task-specific tokens are used, GEN is replaced by TASK.\n• Keep real data Pseudo-samples are replaced by real samples from previous tasks. The quantity of real samples is equally split between previous tasks. This approach can be considered the upper bound of LAMOL. We denote it as LAMOLγREAL. 1https://github.com/huggingface/pytorch-transformers\n• Fine-tune The model is directly fine-tuned on the stream of tasks, one after another.\n• Multitask learning All tasks are trained simultaneously. Multitask learning is often seen as an upper bound of lifelong learning. In addition, it is also used to determine whether forgetting is caused by a lack of model capacity.\n• Regularization-based methods Online EWC (Schwarz et al., 2018) and MAS (Aljundi et al., 2018) are compared. They are chosen because they are more computationally efficient than SI (Zenke et al., 2017) and more memory efficient than IMM (Lee et al., 2017). Additionally, experiments such as Elhoseiny et al. (2018) show that MAS has better performance overall.\n• Gradient Episodic Memory (GEM) When training each task, we randomly sample data from previous tasks, in an amount equivalent to 5% of the current task size, into the memory. In each optimization step, the GEM (Lopez-Paz et al., 2017) approach retrieves all the data in the memory to calculate the gradients for the previous tasks.\n• Improved memory-based parameter adaptation (MBPA++) Sparse experience replay and local adaptation for LLL as proposed in d’Autume et al. (2019). We also re-implement the paper and report better scores using different hyperparameters." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "5.1 SINGLE TASK", "text": "To establish a reference on the capability of the GPT-2 model on every dataset, we trained the model on each dataset independently. The results are shown in Table 2. We observe that the performance of the GPT-2 model is actually quite good, even beating the BERT-based model (d’Autume et al., 2019) on text classification datasets by a large margin. Thus, the GPT-2 model has the potential for superior LLL performance, as long as we can prevent catastrophic forgetting." }, { "heading": "5.2 SST, QA-SRL, AND WOZ TASKS", "text": "For an initial understanding of the performance of all of the methods and the effect of task order, we first conducted a small-scale experiment on three small datasets: SST, QA-SRL, and WOZ. We trained all but the multitasked method on all six permutations of the task order. The final score for each order was obtained by evaluating the model at the conclusion of the training process. The results are shown in Table 3; we make several observations. Note that LAMOL with γ = 0 is not the same as Fine-tuned, as the LM loss is still optimized.\n• Fine-tuned, EWC, MAS, and LAMOL with γ = 0 show similar performance and are much worse than LAMOL with γ > 0.\n• LAMOL0.2GEN, our best performing method, is only 1.8 percent away from Multitasked, which implies almost no forgetting during LLL.\n• The order of the tasks is crucial to the performance. For instance, the WOZ score drops significantly after training other tasks. Thus, if WOZ is not the last task, the performance is usually noticeably worse.\n• When using LAMOL, the performance of old tasks maintains almost the same level throughout the training process.
When the sampling ratio γ is increased, the performance also increases, especially when increased from 0 to 0.05.\n• When γ = 0, adding task-specific tokens harms performance, because the model must fit additional special tokens that are useless. Adding task-specific tokens is also not helpful if γ = 0.2. We believe that 0.2 is enough for three tasks; thus task-specific tokens are redundant. However, when γ = 0.05, task-specific tokens are beneficial because the tokens are needed to help retain a substantial presence of the first task when training the third task.\n• We see that a better LLL method usually has a smaller standard deviation, which implies that it is affected less by task order. Adding task-specific tokens also has a stabilizing effect.\nThe complete forgetting progress is illustrated in Appendix B. Clearly, Fine-tuned, EWC, MAS, LAMOL0GEN, and LAMOL0TASK reveal similar patterns. However, the proposed LAMOL with γ > 0 displays the ability to retain its learned knowledge. In the case of WOZ→ SRL→ SST, the WOZ score even increases after training the third task using LAMOL with γ = 0.2." }, { "heading": "5.3 FIVE DECANLP TASKS", "text": "Here, we train the following five tasks sequentially: SQuAD, WikiSQL, SST, QA-SRL, and WOZ. Given the limited computing resources, we explore only one task order: from large to small tasks, according to the number of training samples.\nAs shown in Table 4, LAMOL outperforms all baselines by a large margin and on average approaches within 2–3% of the multitasked upper bound. Also, as expected, the performance of LAMOL improves as the sampling ratio γ increases and task-specific tokens are used.\nThere is also a gap between our method and the method of keeping real samples. As shown in the table, using real samples is much more sample-efficient, as 5% of real samples beats 20% of pseudo-samples. This may be due to the less-than-ideal quality of the pseudo-data. The longer the paragraphs are, the harder it is for the model to create high-quality samples. After observing the samples generated when using task-specific tokens, we discover some “chaos”. That is, some examples generated by the model do not exactly correspond to the task-specific token. This implies that the task-specific tokens are sometimes too weak to constrain the model; thus their influence is overshadowed by other tokens. We believe that solving this problem will bring the performance when using task-specific tokens closer to that of using real samples; however, we leave this as future work.\nFigure 3 illustrates the test scores of each method on each task throughout the training. We clearly see that when using LAMOL, the model remembers nearly perfectly.\nWe make several observations:\n• When training SQuAD, QA-SRL has not been trained yet, but the score of QA-SRL is already around 40. Also, when training QA-SRL, the SQuAD score revives if the model has forgotten SQuAD. These two facts imply that SQuAD and SRL are similar tasks, such that the model is capable of transferring knowledge from one to the other.\n• If forward transfer exists, replaying pseudo-data also retains the forward transfer. That is, the QA-SRL score does not drop after training on WikiSQL and SST when LAMOL is used but drops significantly for other methods.\n• The transferability between SQuAD and QA-SRL is expected.
On the other hand, the transferability between WikiSQL and QA-SRL is quite surprising; the WikiSQL score improves considerably when training on QA-SRL for Fine-tuned and MAS after WikiSQL is forgotten during SST training." }, { "heading": "5.4 TEXT CLASSIFICATION TASKS", "text": "We compared the proposed method against the state-of-the-art MBPA++ proposed in d’Autume et al. (2019), both by citing their original numbers and by reproducing their methods. We chose text classification as opposed to QA because we believe that the LM has more of a disadvantage in text classification than in QA. We compared with LAMOL0.2TASK due to its good performance and stability. Following their paper and testing our model on the same four kinds of task orders, the results are shown in Table 5.\nOur implementation results in much higher scores than the original ones. However, the proposed LAMOL0.2TASK still outperforms our implementation of MBPA++.\n5.5 INFLUENCE OF SAMPLING RATIO γ\nAs the value of γ determines the performance of LLL, we conducted a medium-scale experiment to understand the influence of γ with and without task-specific tokens. In this experiment we used WikiSQL (blue color), SST (orange), QA-SRL (green), and WOZ (red), in that training order. The results are shown in Figure 4.\nUnsurprisingly, the less generation done by the model, the more likely the vanishing distribution described in Section 3 occurs: the model forgets how to generate previous tasks, as the ratio of previous tasks in the total dataset decreases exponentially over time. Models using task-specific tokens mitigate this somewhat, as demonstrated in the first subgraph, where the performance of LAMOL0.03TASK is much better than that of LAMOL0.03GEN.\nIn addition, the more samples the model generates, the better the overall performance of the model. However, this performance gain disappears when the sampling ratio γ is around 0.1 to 0.3." }, { "heading": "6 CONCLUSION", "text": "We propose LAMOL, a simple yet effective method for LLL based on language modeling. A single LM achieves LLL without additional model components and without keeping old examples. Moreover, any pre-trained LM can be used to leverage a large amount of unlabeled text to improve LLL. Finally, more tasks can be added whenever needed." }, { "heading": "ACKNOWLEDGEMENT", "text": "This work was supported by the Ministry of Science and Technology of Taiwan." }, { "heading": "A TASKS, DATASET, AND METRICS", "text": "Five tasks and their corresponding datasets from decaNLP (Bryan McCann & Socher, 2018):\n• Question Answering – Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016): This dataset consists of contexts, questions, and answers. The contexts are paragraphs from English Wikipedia, and the answers are spans of the corresponding context paragraphs. For evaluation, we use the normalized F1 score (nF1), which strips out articles and punctuation as in Bryan McCann & Socher (2018). The test set for this task is hidden by the host, so users must upload models to the host’s platform to generate test results; due to this inconvenience and our many models, we elected to evaluate on the development set instead. Note that we do not use the development set in the training process. The size of the training set is 87,599 while that of the development set is 10,570.\n• Semantic Parsing – WikiSQL (Zhong et al., 2017): In this task, natural language sentences are translated into structured SQL queries.
WikiSQL provides logical forms along with natural language utterances. The exact match of the logical forms (lfEM) is used to evaluate the performance. The model outputs are required to match the SQL format; otherwise, they receive no score. The size of the training set is 56,355; that of the test set is 15,878.\n• Sentiment Analysis – Stanford Sentiment Treebank (SST, binary version) (Radford et al., 2017): This dataset consists of movie reviews and their answers, which are binary (positive or negative). The exact match score is used as the metric. The size of the training set is 6,920; that of the test set is 1,821.\n• Semantic Role Labeling – QA-SRL (He et al., 2017): QA-SRL is a question answering form of the SRL task. The normalized F1 (nF1) score is used. The size of the training set is 6,414; that of the test set is 2,201.\n• Goal-Oriented Dialogue – English Wizard of Oz (WOZ) (Wen et al., 2016): WOZ is a restaurant reservation task that provides a predefined ontology of the information needed for an agent to make reservations for customers. To keep track of the dialogue state, turn-based dialogue state EM (dsEM), which requires that the model outputs exactly follow the turn order of the conversation, is used for judgment. The size of the training set is 2,536; that of the test set is 1,646.\nFour text classification tasks and five datasets from MBPA++ (d’Autume et al., 2019):\n• News Classification – AGNews: News articles to be classified into 4 classes.\n• Sentiment Analysis – Yelp and Amazon: Customer reviews and ratings on Yelp and Amazon. Both datasets include 5 classes.\n• Wikipedia Article Classification – DBPedia: Articles and their corresponding categories on Wikipedia, including 14 classes.\n• Questions and Answers Categorization – Yahoo: Questions and answers on the Yahoo! platform, including 10 classes.\nThe dataset collected by Xiang Zhang (2015) is available at http://goo.gl/JyCnZq. Given the unbalanced dataset sizes, we randomly sample 115,000 training examples and 7,600 test examples from all the datasets per d’Autume et al. (2019). All the tasks use exact match accuracy as the evaluation metric." }, { "heading": "B OVERVIEW OF THE FORGETTING PROGRESS FOR THREE TASKS", "text": "" }, { "heading": "C REVERSE ORDER OF FIVE DECANLP TASKS", "text": "" }, { "heading": "D GENERATED EXAMPLES", "text": "" } ]
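To make Sections 3.1–3.2 concrete, below is a minimal sketch of the data formatting and of one LAMOL optimization step, written against the Hugging Face interface referenced in the footnote. The token strings, loss masking, and generation settings are simplified relative to the released code, so treat this as an illustrative sketch rather than the authors' implementation (it also assumes a pad token has been set on the tokenizer):

```python
import torch

GEN, ANS, EOS = "[GEN]", "[ANS]", "[EOS]"

def qa_format(context, question, answer):
    # QA objective: decode the answer after reading context and question.
    return f"{context} {question} {ANS} {answer} {EOS}"

def lm_format(context, question, answer, gen_token=GEN):
    # LM objective: decode all three parts after a generation token.
    return f"{gen_token} {context} {question} {ANS} {answer} {EOS}"

def lamol_step(model, tokenizer, batch, lam=0.25):
    """One optimization step on a batch of (context, question, answer)
    triples; L = L_QA + lambda * L_LM as in Section 3.2."""
    qa = tokenizer([qa_format(*s) for s in batch],
                   return_tensors="pt", padding=True).input_ids
    lm = tokenizer([lm_format(*s) for s in batch],
                   return_tensors="pt", padding=True).input_ids
    # Standard causal-LM losses; the real method additionally masks the QA
    # loss so that only the answer span is predicted.
    return model(qa, labels=qa).loss + lam * model(lm, labels=lm).loss

def generate_pseudo_samples(model, tokenizer, n, gen_token=GEN, k=20):
    # Before task i, sample pseudo-samples of earlier tasks with top-k
    # sampling, discarding any sample without exactly one ANS token.
    prompt = tokenizer(gen_token, return_tensors="pt").input_ids
    out = model.generate(prompt, do_sample=True, top_k=k,
                         max_length=512, num_return_sequences=n)
    texts = tokenizer.batch_decode(out)
    return [t for t in texts if t.count(ANS) == 1]
```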
2019
LAMOL: LANGUAGE MODELING FOR LIFELONG LANGUAGE LEARNING
SP:bce4d9d2825454f2b345f4650abac10efee7c2fb
[ "The problem addressed by this paper is the estimation of trajectories of moving objects thrown / launched by a user, in particular in computer games like angry birds or basketball simulation games. A deep neural network is trained on a small dataset of ~ 300 trajectories and estimates the underlying physical properties of the trajectory (initial position, direction and strength of initial force etc.). A new variant of deep network is introduced, which is based on an encoder-decoder model, the decoder being a fully handcrafted module using known physics (projectile motion).", "This paper proposes an architecture that encodes a known physics motion equation of a trajectory of a moving object. The modeled equation has 3 variables and the network works in a latent space- contrary to taking raw images. It uses an auxiliary network (named InferNet) to train the final one used at inference time (named RelateNet). The former aims to reconstruct the input sequence of positions representing trajectory, and has intermediate 3 latent variables that correspond to the 3 variables of the modeled equation, while as decoder it uses the modeled known equation itself. The latter is a mapping from the relative position of the object to 2 latent variables of the former InferNet, and is trained with MSE loss. At inference, RelateNet takes as input the relative position of the object, predicts 2 variables of the equation and finally uses the motion equation to calculate the trajectory." ]
In this work we present an approach that combines deep learning with the laws of Newtonian physics for accurate trajectory predictions in physical games. Our model learns to estimate the physical properties and forces that generated given observations, learns the relationships between available player actions and the estimated physical properties, and uses these extracted forces for predictions. We show the advantages of using physical laws together with deep learning by evaluating our model against two baseline models that automatically discover features from the data without such knowledge. We evaluate our model's ability to extract physical properties and to generalize to unseen trajectories in two games with a shooting mechanism. We also evaluate our model's capability to transfer learned knowledge from a 2D game for predictions in a 3D game with similar physics. We show that by using physical laws together with deep learning we achieve better human-interpretability of learned physical properties, transfer of knowledge to a game with similar physics, and very accurate predictions for previously unseen data.
[]
[ { "authors": [ "Rene Baillargeon" ], "title": "Physical reasoning in infancy", "venue": "Advances in infancy research,", "year": 1995 }, { "authors": [ "Peter W. Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende", "Koray Kavukcuoglu" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": null, "year": 2016 }, { "authors": [ "Michael B. Chang", "Tomer Ullman", "Antonio Torralba", "Joshua B. Tenenbaum" ], "title": "A compositional object-based approach to learning physical dynamics", "venue": null, "year": 2016 }, { "authors": [ "Nikhil Nagori" ], "title": "Basketball 3d shooter", "venue": "https://code-projects.org/ basketball-shooter-game-in-unity-engine-with-source-code/,", "year": 2017 }, { "authors": [ "Jochen Renz", "Xiaoyu Ge", "Matthew Stephenson", "Peng Zhang" ], "title": "Ai meets angry birds", "venue": "Nature Machine Intelligence, 1:328,", "year": 2019 }, { "authors": [ "D.E. Rumelhart", "G.E. Hinton", "R.J. Williams" ], "title": "Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1. chapter Learning Internal Representations by Error Propagation, pp. 318–362", "venue": null, "year": 1986 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Nicolas Heess", "Jost Tobias Springenberg", "Josh Merel", "Martin A. Riedmiller", "Raia Hadsell", "Peter W. Battaglia" ], "title": "Graph networks as learnable physics engines for inference and control", "venue": null, "year": 2018 }, { "authors": [ "James S. Walker" ], "title": "Physics with Mastering Physics, 4th Edition", "venue": null, "year": 2010 }, { "authors": [ "Jiajun Wu", "Ilker Yildirim", "Joseph J Lim", "Bill Freeman", "Josh Tenenbaum" ], "title": "Galileo: Perceiving physical object properties by integrating a physics engine with deep learning", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Jiajun Wu", "Joseph J. Lim", "Hongyi Zhang", "Joshua B. Tenenbaum", "William T. Freeman" ], "title": "Physics 101: Learning physical object properties from unlabeled videos", "venue": null, "year": 2016 }, { "authors": [ "Yu Zhao", "Rennong Yang", "Guillaume Chevalier", "Rajiv Shah", "Rob Romijnders" ], "title": "Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Games that follow Newton’s laws of physics despite being a relatively easy task for humans, remain to be a challenging task for artificially intelligent agents due to the requirements for an agent to understand underlying physical laws and relationships between available player’s actions and their effect in the environment. In order to predict the trajectory of a physical object that was shot using some shooting mechanism, one needs to understand the relationship between initial force that was applied to the object by a mechanism and its initial velocity, have a knowledge of hidden physical forces of the environment such as gravity and be able to use basic physical laws for the prediction. Humans, have the ability to quickly learn such physical laws and properties of objects in a given physical task from pure observations and experience with the environment. In addition, humans tend to use previously learned knowledge in similar tasks. As was found by researchers in human psychology, humans can transfer previously acquired abilities and knowledge to a new task if the domain of the original learning task overlaps with the novel one (Council, 2000).\nThe problem of learning properties of the physical environment and its objects directly from observations and the problem of using previously acquired knowledge in a similar task are important to solve in AI as this is one of the basic abilities of human intelligence that humans learn during infancy (Baillargeon, 1995). Solving these two problems can bring AI research one step closer to achieving human-like or superhuman results in physical games.\nIn this work we explore one of the possible approaches to these two problems by proposing a model that is able to learn underlying physical properties of objects and forces of the environment directly from observations and use the extracted physical properties in order to build a relationships between available in-game variables and related physical forces. Furthermore, our model then uses learned physical knowledge in order to accurately predict unseen objects trajectories in games that follow Newtonian physics and contain some shooting mechanism. We also explore the ability of our model\nto transfer learned knowledge by training a model in a 2D game and testing it in a 3D game that follows similar physics with no further training.\nOur approach combines modern deep learning techniques (LeCun et al., 2015) and well-known physics laws that were discovered by physicists hundreds of years ago. We show that our model automatically learns underlying physical forces directly from the small amount of observations, learns the relationships between learned physical forces with available in-game variables and uses them for prediction of unseen object’s trajectories. Moreover, we also show that our model allows us to easily transfer learned physical forces and knowledge to the game with similar task.\nIn order to evaluate our model abilities to infer physical properties from observations and to predict unseen trajectories, we use two different games that follow Newtonian Physics. The first game that we use as a testing environment for our model is Science Birds. Science Birds is a clone of Angry Birds - a popular video game where the objective is to destroy all green pigs by shooting birds from a slingshot. 
The game has proven to be difficult for artificially intelligent playing agents that use deep learning, and many agents have failed to solve the game in the past (Renz et al., 2019). The second game that we are using as our testing environment is the Basketball 3D shooter game. In this game the objective of the player is to shoot a ball into a basket.\nIn order to test the abilities of our model to transfer knowledge to a different game, we first train our model on a small number of shot trajectories from the Science Birds game and then test the trained model for predictions of the ball trajectory in the Basketball 3D shooting game.\nWe compare the results of our proposed model that is augmented with physical laws against two baseline models. The first baseline model learns to automatically extract features from observations without knowledge of physical laws, whereas the second baseline model learns to directly predict trajectories from the given in-game variables." }, { "heading": "2 RELATED WORK", "text": "Previous AI work in predicting the future dynamics of objects has involved using deep learning approaches such as: graph neural networks for prediction of interactions between objects and their future dynamics (Battaglia et al. (2016); Watters et al. (2017); Sanchez-Gonzalez et al. (2018)), Bidirectional LSTM and Mixture Density networks for basketball trajectory prediction (Zhao et al. (2017)), and the Neural Physics Engine (Chang et al. (2016)). Some researchers have also tried to combine actual physical laws with deep learning. In one such work, researchers propose a model that learns physical properties of objects from observations (Wu et al. (2016)). Another work proposes to integrate a physics engine with deep learning to infer physical properties (Wu et al. (2015)).\nHowever, most of the work on predicting future object dynamics is focused on learning physics from scratch or uses some known physical properties in order to train a model. This could be a problem, as in most real-world physical games the underlying physical properties are not known to the player unless one has access to the source code of the physics engine. Because of that, these properties have to be learned directly from experience with the environment without any supervision on the actual values of the physical properties. Another important point is that instead of learning physics from scratch, we can use to our benefit already discovered and well-established laws of physics.\nIn this work we propose an approach that combines classical feedforward networks with well-known physical laws in order to guide our model's learning process. By doing so, our model learns physical properties directly from observations without any direct supervision on the actual values of these properties. Another contribution is that our model learns from a very small training dataset and generalizes well to the entire space. Furthermore, the learned values can be easily interpreted by humans, which allows us to use them in any other task in the presented test domains, and they can be easily transferred to other games with similar physics." }, { "heading": "3 APPROACH", "text": "" }, { "heading": "3.1 BASELINE MODELS", "text": "In order to measure the advantages of combining classical physical laws with deep learning, we compare our model against pure deep learning approaches with similar architectures." }, { "heading": "3.1.1 ENCODER BASELINE MODEL", "text": "Our first baseline model is based on the idea of autoencoders (Rumelhart et al.
(1986)). Contrary to the proposed model in section 3.2, this model learns to automatically discover features from observations. It takes a sequence of points T = {(x0, y0), (x1, y1), ..., (xn, yn)} as its input and encodes it to a latent space Tenc. The encoded trajectory is then used by the decoder to reconstruct the trajectory. The second part of this baseline model consists of another MLP that learns to associate the relative position of the physical object that generated the trajectory with the learned latent space Tenc. More formally, given trajectory T as input, encoder fencoder, and decoder fdecoder, our model reconstructs a trajectory T̂ from the latent space as follows:\nT̂ = fdecoder(fencoder({(x0, y0), (x1, y1), ..., (xn, yn)})) (1)\nOnce the trajectory is reconstructed, we compute the loss using the Mean Squared Error and update the weights of our networks:\n(1/n) ∑_{i=1}^{n} (Ti − T̂i)^2 (2)\nThe second part of this baseline model is another MLP fassociate that learns to associate a given initial relative position of a physical object (xr0, yr0) with the encoded trajectory Tenc derived in the previous step:\nT̂enc = fassociate((xr0, yr0)) (3)\nIn order to update the weights of fassociate, we compute the loss using the Mean Squared Error between the two derived encodings Tenc and T̂enc.\nAfter that, we predict the trajectory using the derived T̂enc as follows: T̂ = fdecoder(T̂enc)" }, { "heading": "3.1.2 SIMPLE BASELINE MODEL", "text": "In order to evaluate the advantages of using observations and an encoder-decoder scheme, we use a second baseline model that does not use an encoder-decoder and directly learns to predict the trajectory from the given in-game forces or relative position of a physical object. More formally, given the relative position of a physical object (xr0, yr0) and MLP fsimple, we compute the trajectory as follows:\nT̂ = fsimple((xr0, yr0)) (4)" }, { "heading": "3.2 PHYSICS AWARE MODEL", "text": "Similarly to the first baseline model presented in section 3.1.1, the Physics Aware Network (PhysANet) consists of two parts: a neural network that discovers the physical forces of the environment and the action that generated given observations, and a neural network that learns the relationship between the in-game actions or forces and the predicted physical values. We further refer to these two parts as InferNet and RelateNet." }, { "heading": "3.2.1 INFERNET", "text": "The goal of InferNet is to extract the physical forces that have generated a given trajectory {(x0, y0), (x1, y1)...(xn, yn)} using guidance from known physical equations. The discovered physical forces are then plugged into the projectile motion equation in order to calculate the trajectory.\nInferNet consists of two internal small MLPs that are trained together as shown in Figure 2 (Left). The first MLP takes in a batch of trajectories and predicts a single value of the gravity force for all of them. The second MLP takes in a batch of trajectories and for each trajectory in the batch it predicts the initial velocity and angle of the shot. These predicted values are then inserted into the projectile motion equation in order to calculate the resulting trajectory. The projectile motion equation is defined as follows (Walker (2010)):\ny = h + x tan(θ) − g x^2 / (2 V0^2 cos(θ)^2) (5)\nIn equation 5, h is the initial height, g is gravity, θ is the angle of the shot and V0 is the initial velocity.\nOnce the trajectory is calculated, we compute the loss between the observed trajectory T and the predicted trajectory T̂ using the Mean Squared Error in a similar way as was defined in equation 2."
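A minimal PyTorch sketch of the InferNet objective built around equation (5) is given below; the two MLPs are placeholders (the paper defers architectural details to its Appendix A), and averaging the gravity head over the batch is one simple way to realize the single shared gravity value described above:

```python
import torch

def projectile(h, theta, v0, g, x):
    # Equation (5): y = h + x*tan(theta) - g*x^2 / (2*v0^2*cos(theta)^2)
    return h + x * torch.tan(theta) - g * x**2 / (2 * v0**2 * torch.cos(theta)**2)

def infer_net_loss(traj, angle_velocity_mlp, gravity_mlp):
    """MSE between observed and reconstructed trajectories.

    traj: (batch, n_points, 2) tensor of (x, y) positions.
    angle_velocity_mlp: (batch, 2*n_points) -> (batch, 2), one (theta, v0)
    pair per trajectory; gravity_mlp: (batch, 2*n_points) -> (batch, 1).
    """
    x, y = traj[..., 0], traj[..., 1]
    flat = traj.flatten(1)
    theta, v0 = angle_velocity_mlp(flat).unbind(-1)
    g = gravity_mlp(flat).mean()          # single gravity for the whole batch
    y_hat = projectile(y[:, :1], theta.unsqueeze(1), v0.unsqueeze(1),
                       g, x - x[:, :1])   # shift x so each shot starts at 0
    return torch.mean((y - y_hat) ** 2)
```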
}, { "heading": "3.2.2 RELATENET", "text": "The goal of RelateNet that is shown on Figure 2 (Right) is to learn the relationship between ingame variables and physics forces predicted by InferNet. This network tries to predict extracted by InferNet forces directly from the given in-game values such as relative position of a bird. The in-game variables can be any variables with continuous or discrete domain that can be chosen by the playing agent in order to make a shot. As an example, in-game variables can be the initial forces of the shot or object’s relative position to the shooting mechanism. RelateNet consists of two internal MLPs where first MLP predicts initial velocities and second MLP predicts initial angles. In order to update the weights of both internal MLPs, we calculate the MSE between values predicted by InferNet and values predicted by RelateNet. More details on the architecture of the PhysANet can be found in the Appendix A." }, { "heading": "4 EXPERIMENTS", "text": "In our experiments we are using Science Birds (Ferreira, 2018) and Basketball 3D shooter game (Nagori, 2017) as a testing environments." }, { "heading": "4.1 SCIENCE BIRDS TRAJECTORY PREDICTION", "text": "Science Birds is a clone of Angry Birds which has been one of the most popular video games for a period of several years. The main goal of the game is to kill all green pigs on the level together with applying as much damage as possible to the surrounding structures. The player is provided with a sequence of (sometimes) different birds to shoot from a slingshot. In Science Birds, similarly to Angry Birds all game objects are following the laws of Newton’s Physics (in 2D). In order to predict the outcomes of the shots, the player needs to understand basic physical laws that govern the game and be able to use them for a prediction of a bird’s trajectory.\nScience Birds contains a slingshot (Figure 3) which serves as a shooting mechanism that allows a player to shoot a bird. Player can select a strength and the angle of the shot by changing a relative position of the bird to the center of a slingshot. In order to accurately predict the trajectory of a bird, player needs to have an understanding of the relationship between relative position of the bird to the slingshot and resulting trajectory. The underlying physical properties of the game such as gravity, or initial velocity of a related shot or its angle is unknown to the player and has to be estimated from the observations and experiences.\nThe goal of our model in this experiment is to learn the relationship between relative position of a bird and initial velocity and angle of the resulting shot in order to predict trajectories of previously unseen shots. In this experiment we do not provide actual physical properties or forces to our model and it has to learn them directly from observations.\nIn order to train our model for this experiment, we have collected a small data set of 365 trajectories and relative positions of the bird that generated these trajectories from Science Birds. This data set was then split to train, test and validation data sets, resulting in a training data set that contains only 200 trajectories. 
We are using such a small training data set in order to prevent the model from simply memorizing all possible trajectories and instead have it learn to use extracted physical forces and properties to predict unseen trajectories.\nDuring the training of the model, PhysANet takes in a trajectory as input and estimates the angle θ, initial velocity V0 and gravity g that generated this trajectory. The predicted values are then inserted into projectile motion equation (5) in order to recreate the trajectory. As a second step, the relative position of the bird to the slingshot (x0, y0) is fed as input to RelateNet, which learns to predict the values θ and V0 predicted by InferNet.\nDuring the testing of the model, we only feed the relative position of the bird to RelateNet, which predicts values θ and V0 that are plugged into the projectile motion equation in order to calculate the trajectory." }, { "heading": "4.2 BASKETBALL 3D SHOOTER GAME", "text": "Basketball 3D shooter is a game where the player has to throw a ball into the basket in order to earn points. In this game the shooting mechanism is different compared to Science Birds, and the initial force that is applied to the physical object does not depend on the relative position of that object.\nIn this experiment we are interested in the ability of our model to transfer the learned relationships and physical properties from one game with a shooting mechanism to another. In particular, we were interested in the ability of our model to use learned knowledge from Science Birds for predictions of the ball's trajectory in Basketball 3D Shooter. Because of the absence of a slingshot-like mechanism in this game, we train our models to predict the trajectories based on the initial force that is applied to the bird or ball by a shooting mechanism when launched.\nAs in the Science Birds experiment described in section 4.1, the input to InferNet is a trajectory {(x0, y0), (x1, y1)...(xn, yn)} and the input to RelateNet is the initial force that is applied to the bird or basketball by a shooting mechanism. We note here that the magnitude of the initial force applied to the ball in the Basketball 3D Shooter game is higher than the magnitude of the initial force applied to the bird in Science Birds." }, { "heading": "4.3 TESTING DATASETS", "text": "In order to evaluate our model we are using three testing datasets: Test, Generalization and Basketball. The Test dataset contains trajectories generated by previously unseen (by the model) initial forces or relative positions of a bird. The Generalization dataset contains trajectories of shots made in the opposite direction to the shots in the training set; in particular, it contains trajectories, and the related relative positions of the bird, for shots to the left side of the slingshot. The Basketball dataset contains trajectories, and the related initial forces that were applied to the ball, from the Basketball 3D shooter game.\nThese three datasets are not exposed to our models during the training. In the Basketball experiment described in section 4.2, we do not train our models on the Basketball dataset but only on the training dataset from Science Birds." }, { "heading": "5 RESULTS", "text": "" }, { "heading": "5.1 PREDICTION OF TRAJECTORIES IN SCIENCE BIRDS", "text": "The results of our first experiment show that PhysANet has the most accurate trajectory prediction out of all tested models.
PhysANet was able to learn to associate the position of a bird relative to the slingshot with the initial velocity and angle of a shot, as shown in Figure 4. In particular, Figure 4 shows two learned relationships between the position of a bird before it is shot from the slingshot and the initial parameters of the shot. The graph on the left side of Figure 4 shows how the initial angle between the bird and the center of the slingshot affects the initial angle of the trajectory. As an example, from this graph we can see that a bird positioned at an angle of 220 degrees results in a trajectory with an initial angle of roughly 45 degrees, which, as we know from trigonometry, is close to the "truth". The graph on the right side of Figure 4 shows how the initial velocity depends on the distance between the initial bird position and the center of the slingshot. In order to show this relationship we fixed the bird at an angle of 225 degrees relative to the center of the slingshot and only changed the distance. From this graph we can see that, as one would expect, the more we extend the slingshot, the stronger the shot and the higher the initial velocity.
In Figure 5 we present a few examples of trajectories predicted by PhysANet together with the actual trajectories the bird took. The presented examples show three trajectories from the test dataset, which the network had never seen before, and one trajectory of a shot to the left side of the slingshot. Despite the fact that our training dataset did not contain shots to the left side of the slingshot, our model shows good generalization ability and predicts the trajectory quite accurately. This shows that the learned physical properties and the relationships between the positions of the bird and the slingshot can be used by PhysANet to accurately predict trajectories.
As shown in Table 1, PhysANet showed significantly better overall accuracy on the Test and Generalization datasets than our two baseline models." }, { "heading": "5.2 TRANSFER OF KNOWLEDGE TO BASKETBALL 3D", "text": "The results of this experiment show that the physical properties and relationships learned in Science Birds can be transferred to the Basketball 3D shooter game. Because the model was trained on data from a 2D environment, we tested the prediction of 3D trajectories by separating the prediction process into predictions in the x,y and z,y planes. Surprisingly, despite being trained in a 2D environment, PhysANet predicted the trajectories well in both planes without any additional training in the new 3D environment.
Figure 6 shows examples of predictions of trajectories from the Basketball dataset by PhysANet, BM1 and BM2. The first three pictures show trajectories predicted by PhysANet, the fourth picture shows a prediction made by BM1 and the last picture shows a prediction made by BM2. The first two pictures show predictions for one trajectory in the x,y and z,y planes; the remaining pictures in Figure 6 show predictions in the x,y plane only. As we can see from Figure 6, despite the fact that initial forces in the Basketball 3D shooter game have higher magnitudes, PhysANet was able to handle them correctly and still predict trajectories of the correct shape with relatively low error.
This is in contrast to the two baseline models, which could not handle the new environment properly and predicted seemingly random trajectories, despite being relatively accurate in the first experiment described in Section 4.
The results presented in Table 1 compare the Mean Squared Errors observed for all three models. These results show that PhysANet surpasses the two baseline models on all testing datasets. For example, the error of PhysANet on the Basketball dataset is nearly two times lower than the error of Baseline Model 1 (BM1), which was described in Section 3.1.1. Surprisingly, although BM1 showed better results on the Test and Generalization datasets than BM2, it showed a higher prediction error on the Basketball 3D dataset than the simpler baseline model (BM2). Although PhysANet showed relatively good results in predicting trajectories in a previously unseen game, there is still some loss in prediction accuracy. We hypothesize that this loss of accuracy is caused by differences in the physics engines of the two games. One possible solution to this problem is to improve our model's ability to adapt to the new environment after observing a few shots; we leave such an improvement for future research." }, { "heading": "6 DISCUSSION", "text": "In this work we have shown that combining known physical laws with deep learning can be beneficial for learning more human-interpretable and flexible physical properties directly from a small number of observations. We showed that a model augmented with physical laws achieved better generalization to unseen trajectories, was able to transfer learned knowledge to a different game with similar physics, and made generally more accurate trajectory predictions than models without such physical knowledge. Our results show that in some situations using already discovered physical laws and equations can be beneficial and can guide a deep learning model to learn physical properties and relationships that lead to more accurate predictions than those made by models that learned features automatically without any supervision. Because the learned values can be easily interpreted by a human, they can be used in potentially new tasks. As an example, the learned gravity of the environment can be used to predict the physical behaviour of other physical objects in the environment. Another important quality of our approach is that the learned knowledge can be transferred to a task with similar physics, as was shown by the Basketball experiment. Lastly, our model can easily be adapted to other physical tasks by changing or adding physical equations and learning different physical properties from observations. This could potentially allow us to fully master the physics of a game and use the learned knowledge as part of an artificially intelligent agent in order to achieve human-like performance in physical games.
Our results show that it can be beneficial to use physical laws that were discovered by physicists hundreds of years ago together with deep learning techniques in order to unleash their full predictive power and bring artificially intelligent agents a step closer to achieving human-like physical reasoning."
}, { "heading": "A APPENDIX", "text": "" }, { "heading": "B PHYSANET ARCHITECTURE", "text": "As was mentioned in section 3.2, PhysANet has two separate neural networks: InferNet and RelateNet.\nInferNet consists of two neural networks (MLP1 and MLP2) each with a single hidden layer of size 200 and 100 respectively. MLP1 takes in trajectories consisting of at most 25 points. Trajectories\nwith less than 25 points are padded with zeros, and trajectories with more than 25 points are cut. Trajectories and related in-game variables are fed to the network in batches of size 8.\nRelateNet also consists of two neural networks (MLP 1 and MLP 2) each with a single hidden layer of size 100. MLP 1 and MLP 2 both take in the in-game variables in batches of size 8.\nFor all hidden layers we are using ReLU activation function and no activation function for the last layer.\nIn order to train PhysANet, we first train InferNet until it converges to value of a loss being close to zero and after that we train RelateNet. Simultaneous training of both networks is also possible, however in our experiments training RelateNet after InferNet showed better results." } ]
2,019
LEARNING UNDERLYING PHYSICAL PROPERTIES FROM OBSERVATIONS FOR TRAJECTORY PREDICTION
SP:f6af733aa873bf6ee0f69ec868a2d7a493a0dd0b
[ "The suggest two improvements to boundary detection models: (1) a curriculum learning approach, and (2) augmenting CNNs with features derived from a wavelet transform. For (1), they train half of the epochs with a target boundary that is the intersection between a Canny edge filter and the dilated groundtruth. The second half of epochs is with the normal groundtruth. For (2), they compute multiscale wavelet transforms, and combine it with each scale of CNN features. They find on a toy MNIST example that the wavelet transform doesn’t impact results very much and curriculum learning seems to provide some gains. On the Aerial Road Contours dataset, they find an improvement of ~15% mAP over the prior baseline (CASENet).", "The main idea of the paper is adding a curriculum learning-based extension to CASEnet, a boundary detection method from 2017. In the first phase, the loss emphasizes easier examples with high gradient in the image, and in the second phase, the method is trained on all boundary pixels. This change seems to improve edge detection performance on a toy MNIST and an aerial dataset. " ]
This work addresses class-specific object boundary extraction, i.e., retrieving boundary pixels that belong to a class of objects in the given image. Although recent ConvNet-based approaches demonstrate impressive results, we notice that they produce several false alarms and misdetections when used in real-world applications. We hypothesize that although boundary detection is simple at some pixels that are rooted in identifiable high-frequency locations, other pixels pose a higher level of difficulty, for instance, region pixels with an appearance similar to the boundaries, or boundary pixels with insignificant edge strengths. Therefore, the training process needs to account for different levels of learning complexity in different regions to overcome false alarms. In this work, we devise a curriculum-learning-based training process for object boundary detection. This multi-stage training process first trains the network at simpler pixels (with sufficient edge strengths) and then at harder pixels in the later stages of the curriculum. We also propose a novel system for object boundary detection that relies on a fully convolutional neural network (FCN) and wavelet decomposition of image frequencies. This system uses high-frequency bands from the wavelet pyramid and augments them to conv features from different layers of the FCN. Our ablation studies with the contourMNIST dataset, simulated digit contours from MNIST, demonstrate that this explicit high-frequency augmentation helps the model to converge faster. Our model trained by the proposed curriculum scheme outperforms a state-of-the-art object boundary detection method by a significant margin on a challenging aerial image dataset.
[]
[ { "authors": [ "David Acuna", "Amlan Kar", "Sanja Fidler" ], "title": "Devil is in the edges: Learning semantic boundaries from noisy annotations", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Eugene L Allgower", "Kurt Georg" ], "title": "Numerical continuation methods: an introduction, volume 13", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Gedas Bertasius", "Jianbo Shi", "Lorenzo Torresani" ], "title": "High-for-low and low-for-high: Efficient boundary detection from deep object features and its applications to high-level vision", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 504–512,", "year": 2015 }, { "authors": [ "John Canny" ], "title": "A computational approach to edge detection", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 1986 }, { "authors": [ "Li Deng" ], "title": "The mnist database of handwritten digit images for machine learning research [best of the web", "venue": "IEEE Signal Processing Magazine,", "year": 2012 }, { "authors": [ "SM Ali Eslami", "Nicolas Heess", "Theophane Weber", "Yuval Tassa", "David Szepesvari", "Geoffrey E Hinton" ], "title": "Attend, infer, repeat: Fast scene understanding with generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Mark Everingham", "SM Ali Eslami", "Luc Van Gool", "Christopher KI Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes challenge: A retrospective", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Bharath Hariharan", "Pablo Arbeláez", "Lubomir Bourdev", "Subhransu Maji", "Jitendra Malik" ], "title": "Semantic contours from inverse detectors", "venue": "In 2011 International Conference on Computer Vision,", "year": 2011 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Eddy Ilg", "Nikolaus Mayer", "Tonmoy Saikia", "Margret Keuper", "Alexey Dosovitskiy", "Thomas Brox" ], "title": "Flownet 2.0: Evolution of optical flow estimation with deep networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Aleksandar Jevtić", "Ignacio Melgar", "Diego Andina" ], "title": "Ant based edge linking algorithm", "venue": "In 2009 35th Annual Conference of IEEE Industrial Electronics,", "year": 2009 }, { "authors": [ "David J. 
Kriegman", "Jean Ponce" ], "title": "On recognizing and positioning curved 3-d objects from image contours", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1990 }, { "authors": [ "Manohar Kuse", "Shaojie Shen" ], "title": "Robust camera motion estimation using direct edge alignment and sub-gradient method", "venue": "In 2016 IEEE International Conference on Robotics and Automation (ICRA),", "year": 2016 }, { "authors": [ "Yehezkel Lamdan", "Haim J Wolfson" ], "title": "Geometric hashing: A general and efficient model-based recognition", "venue": null, "year": 1988 }, { "authors": [ "David C Lee", "Martial Hebert", "Takeo Kanade" ], "title": "Geometric reasoning for single image structure recovery", "venue": "In 2009 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Jia-Guu Leu", "Limin Chen" ], "title": "Polygonal approximation of 2-d shapes through boundary merging", "venue": "Pattern Recognition Letters,", "year": 1988 }, { "authors": [ "Julien Mairal", "Marius Leordeanu", "Francis Bach", "Martial Hebert", "Jean Ponce" ], "title": "Discriminative sparse image models for class-specific edge detection and image interpretation", "venue": "In European Conference on Computer Vision,", "year": 2008 }, { "authors": [ "Jitendra Malik", "Dror Maydan" ], "title": "Recovering three-dimensional shape from a single image of curved objects", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1989 }, { "authors": [ "Kevis-Kokitsi Maninis", "Jordi Pont-Tuset", "Pablo Arbeláez", "Luc Van Gool" ], "title": "Deep retinal image understanding", "venue": "In International conference on medical image computing and computer-assisted intervention,", "year": 2016 }, { "authors": [ "Luke Metz", "Ben Poole", "David Pfau", "Jascha Sohl-Dickstein" ], "title": "Unrolled generative adversarial networks", "venue": "CoRR, abs/1611.02163,", "year": 2016 }, { "authors": [ "Mukta Prasad", "Andrew Zisserman", "Andrew Fitzgibbon", "M Pawan Kumar", "Philip HS Torr" ], "title": "Learning class-specific edges for object detection and segmentation", "venue": "In Computer Vision, Graphics and Image Processing,", "year": 2006 }, { "authors": [ "Visvanathan Ramesh" ], "title": "Performance characterization of image understanding algorithms", "venue": "PhD thesis, Citeseer,", "year": 1995 }, { "authors": [ "Nitish Srivastava", "Elman Mansimov", "Ruslan Salakhudinov" ], "title": "Unsupervised learning of video representations using lstms", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Mallat Stephane" ], "title": "A wavelet tour of signal processing", "venue": "The Sparse Way,", "year": 1999 }, { "authors": [ "Shenlong Wang", "Sanja Fidler", "Raquel Urtasun" ], "title": "Lost shopping! 
monocular localization in large indoor spaces", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Highresolution image synthesis and semantic manipulation with conditional gans", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Jimei Yang", "Brian Price", "Scott Cohen", "Honglak Lee", "Ming-Hsuan Yang" ], "title": "Object contour detection with a fully convolutional encoder-decoder network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Xin Yu", "Sagar Chaturvedi", "Chen Feng", "Yuichi Taguchi", "Teng-Yok Lee", "Clinton Fernandes", "Srikumar Ramalingam" ], "title": "Vlase: Vehicle localization by aggregating semantic edges", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2018 }, { "authors": [ "Zhiding Yu", "Chen Feng", "Ming-Yu Liu", "Srikumar Ramalingam" ], "title": "Casenet: Deep category-aware semantic edge detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Zhiding Yu", "Weiyang Liu", "Yang Zou", "Chen Feng", "Srikumar Ramalingam", "BVK Vijaya Kumar", "Jan Kautz" ], "title": "Simultaneous edge alignment and learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Kaipeng Zhang", "Zhanpeng Zhang", "Zhifeng Li", "Yu Qiao" ], "title": "Joint face detection and alignment using multitask cascaded convolutional networks", "venue": "IEEE Signal Processing Letters,", "year": 2016 }, { "authors": [ "Dongchen Zhu", "Jiamao Li", "Xianshun Wang", "Jingquan Peng", "Wenjun Shi", "Xiaolin Zhang" ], "title": "Semantic edge based disparity estimation using adaptive dynamic programming for binocular", "venue": "sensors. Sensors,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Class-specific object boundary extraction from images is a fundamental problem in Computer Vision (CV). It has been used as a basic module for several applications including object localization [Yu et al. (2018a); Wang et al. (2015)], 3D reconstruction [Lee et al. (2009); Malik & Maydan (1989); Zhu et al. (2018)], image generation [Isola et al. (2017); Wang et al. (2018)], multi-modal image alignment Kuse & Shen (2016), and organ feature extraction from medical images [Maninis et al. (2016)]. Inspired from the sweeping success of deep neural networks in several CV fields, recent works [Yu et al. (2017); Acuna et al. (2019); Yu et al. (2018b)] designed ConvNet-based architectures\nfor object boundary detection and demonstrated impressive results. However, we notice that the results from these methods, as shown in Figure 1b, still suffer from significant false-alarms and misdetections even in regions without any clutter. We hypothesize that although boundary detection is simple at some pixels that are rooted in identifiable high-frequency locations, other pixels pose a higher level of difficulties, for instance, region pixels with an appearance similar to boundarypixels; or boundary pixels with insignificant edge strengths (ex: camouflaged regions). Therefore, the training process needs to account for different levels of learning complexity around different pixels to achieve better performance levels.\nIn classical CV literature, the different levels of pixel complexities are naturally addressed by decomposing the task into a set of sequential sub-tasks of increasing complexities [Leu & Chen (1988); Lamdan & Wolfson (1988); Kriegman & Ponce (1990); Ramesh (1995)]. Most often, boundary detection problem has been decomposed into three sub-tasks: (a) low-level edge detection such as Canny [Canny (1986)]; (b) semantic tagging/labeling of edge pixels [Prasad et al. (2006)] and (c) edge linking/refining [Jevtić et al. (2009)] in ambiguous regions. These approaches first solve the problem for simpler pixels (with sufficient edge strength) and then reason about harder pixels in the regions with ambiguous or missing evidence. However, with the advent of ConvNets, this classical perspective towards boundary extraction problem has been overlooked. New end-to-end trainable ConvNets have pushed the boundaries of state-of-the-art significantly compared to classical methods. However, we believe that classical multi-stage problem-solving schemes can help to improve the performance of ConvNet models. A parallel machine learning field, Curriculum Learning [Bengio et al. (2009)] also advocates this kind of multi-stage training schemes which train the network with a smoother objective first and later with the target task objective. These schemes are proven to improve the generalization of the models and convergences of training processes in several applications. Motivated by these factors, this work devises a curriculum-learning inspired two-stage training scheme for object boundary extraction that trains the networks for simpler tasks first (subtasks a and b) and then, in the second stage, trains to solve the more complex sub-task (c). Our experimental results on a simulated dataset and a real-world aerial image dataset demonstrate that this systematic training indeed results in better performances.\nAs mentioned already, the task of predicting object boundaries is mostly rooted in identifiable higher-frequency image locations. 
We believe that explicit augmentation of high-frequency content to the ConvNet will improve the convergence of training processes. Hence, this work designs a simple fully convolutional network (FCN) that also takes in high-frequency bands of the image along with the RGB input. In this work, we choose to use high-frequency coefficients from the wavelet decomposition [Stephane (1999)] of the input image and augment them to conv features at different levels. These coefficients encode local features which are vital in representing sharp boundaries. Our empirical results convey that this explicit high-frequency augmentation helps the model to converge faster, especially in the first stage of curriculum learning.
In summary, our contributions in this work are the following:
• A novel two-stage training scheme (inspired by curriculum learning) to learn class-specific object boundaries.
• A novel ConvNet augmented by high-frequency wavelets.
• A thorough ablation study on a simulated MNIST digit-contour dataset.
• Experiments with a challenging aerial image dataset for road contour extraction.
• A real-world application of road contour extraction for aligning geo-parcels to aerial imagery.
Related Work: The problem of extracting object boundaries has been extensively studied in both the classical and modern literature of CV. Most of the classical methods start with low-level edge detectors and use local/global features to attach semantics to the detected pixels. They later use object-level understanding or mid-level Gestalt cues to reason about missing edge links and occluded boundaries. The work by Prasad et al. (2006) used local texture patterns and a linear SVM classifier to classify edge pixels given by the Canny edge detector. The work by Mairal et al. (2008) reasoned on low-level edges, but learned dictionaries on multiscale RGB patches with sparse coding and used the reconstruction error curves as features for a linear logistic classifier. The work of Hariharan et al. (2011) proposed a detector that combines low-level edge detections and semantic outputs from pre-trained object detectors to localize class-specific contours.
Unlike existing methods, we use explicit high-frequency augmentation to the ConvNet and train it in a curriculum learning scheme that accounts for different levels of pixel complexity with two stages." }, { "heading": "2 THE PROPOSED CURRICULUM LEARNING SCHEME", "text": "Curriculum Learning & Multi-Stage Training: Curriculum learning (CL) or multi-stage training schemes are motivated by the observation that humans and animals seem to learn better when trained with a curriculum-like strategy: start with easier tasks and gradually increase the difficulty level of the tasks. The pioneering work of Bengio et al. (2009) introduced CL concepts to machine learning fields. This work proposed a set of CL schemes for the applications of shape recognition and language modeling, and demonstrated better performance and faster convergence. This work established curriculum learning as a continuation method. Continuation methods [Allgower & Georg (2012)] start with a smoothed objective function and gradually move to less smoothed functions. In other terms, these methods consider a class of objective functions that can be expressed as
$C_\lambda(\theta) = (1-\lambda)\,C_o(\theta) + \lambda\,C_t(\theta)$ (1)
where $C_o(\theta)$ is a smoother or simpler objective and $C_t(\theta)$ is the target objective we wish to optimize. There are several ways to choose $C_o$; it can either be the same loss function as $C_t$ but solving the task on simpler examples, or be a proxy task simpler than the target task. In general, $\lambda$ is a (binary) variable that takes values zero or one. It is set to zero initially and later increases to one. The epoch where it changes from zero to one is referred to as the switch epoch.
Curriculum Learning in CV: CL-inspired training schemes have recently gained attention in CV fields. Some popular CV architectures have leveraged CL-based schemes to improve model generalization and training stability. In FlowNet 2.0 [Ilg et al. (2017)] for optical flow prediction, simpler training data are fed into the network first and then the more difficult dataset. The object detection framework of Zhang et al. (2016) first trains simpler networks (proposal and refiner nets) and then trains the final output net. Here we propose a two-stage CL scheme for object boundary detection methods.
The proposed CL-based training scheme for learning object boundaries: ConvNets for class-specific object boundary detection are in general trained with multi-label cross-entropy-based objectives [Yu et al. (2017)]:
$C_t(\theta) = -\sum_k \sum_p \big( \beta\, Y_k(p) \log \hat{Y}_k(p;\theta) + (1-\beta)(1-Y_k(p)) \log(1-\hat{Y}_k(p;\theta)) \big)$ (2)
where $\theta$ denotes the weights of the network, and $p$ and $k$ represent indices of pixels and class labels respectively. $\hat{Y}$ and $Y$ represent the prediction and groundtruth label maps. $\beta$ is the percentage of non-edge pixels in the image, used to account for the skewness of sample numbers [Yu et al. (2017)]. This objective treats all the contour pixels equally and does not account for the complexity of the task around them. Here, we consider this as the target objective function, $C_t$, that we wish to optimize.
We start the training, however, with a simpler task $C_o$. We believe the pixels with strong edge strength are easy to localize and semantically identify. Hence, we propose to solve the task around those pixels in the first stage. We take the element-wise multiplication between the Canny edge map of the input and the dilated groundtruth label to prepare the supervisory signal Z for this stage:
$Z = E_I \odot Y_D$ (3)
where $E_I$ is the Canny edge map of image $I$ and $Y_D$ the dilated groundtruth map. This is shown in Figure 2e.
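As a concrete illustration of Eq. 3 and the switch in Eq. 1, the stage-1 target and the two-stage schedule can be sketched as below with OpenCV; the Canny thresholds and the dilation kernel size are our assumptions, not values reported in the paper.

```python
import cv2
import numpy as np

def stage1_target(image, gt_boundary, dilate_px=5, canny_lo=50, canny_hi=150):
    """Z = E_I * Y_D (Eq. 3): keep only groundtruth boundary pixels that
    also have sufficient edge strength. `image` is assumed to be an 8-bit
    grayscale array; thresholds and kernel size are illustrative."""
    edges = cv2.Canny(image, canny_lo, canny_hi) > 0                 # E_I
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    dilated = cv2.dilate(gt_boundary.astype(np.uint8), kernel) > 0   # Y_D
    return (edges & dilated).astype(np.float32)                      # Z

def curriculum_target(image, gt_boundary, epoch, total_epochs):
    """Two-stage curriculum of Eq. 1: lambda = 0 (train on Z via C_o)
    before the switch epoch T/2, lambda = 1 (train on Y via C_t) after."""
    if epoch < total_epochs // 2:
        return stage1_target(image, gt_boundary)
    return gt_boundary.astype(np.float32)
```

In the MNIST ablations reported later, the switch epoch is the 5th of 10 epochs, matching T/2.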
The objective function for this stage becomes:
$C_o(\theta) = -\sum_k \sum_p \big( \beta\, Z_k(p) \log \hat{Z}_k(p;\theta) + (1-\beta)(1-Z_k(p)) \log(1-\hat{Z}_k(p;\theta)) \big)$ (4)
Since we use a dilated version of the GT in preparing Z, Z also contains some non-object contour pixels. However, these can be refined in the second stage of CL, when training with Y (Eq 2). Hence, the CL objective function in Eq 1 uses Eq 4 and Eq 2 as the initial and target objective functions respectively. In the CL training scheme, we set the switch epoch to T/2, where T is the total number of training epochs." }, { "heading": "3 HIGH FREQUENCY AUGMENTED CONVNET", "text": "The first stage in the proposed CL-based training scheme is about learning to locate a class of high-frequency pixel locations and recognizing their semantic entity. We believe that explicit augmentation of the high-frequency content of the input image to the network will help it learn faster. Hence, we design a novel architecture with explicit high-frequency augmentation that can make effective use of our multi-stage training scheme.
Frequency Decomposition via Wavelet Transform (WT): Here we briefly introduce the main concepts of the wavelet decomposition of images [Stephane (1999)]. The multi-resolution wavelet transform provides localized spatial frequency analysis of images. This transform decomposes the given image into different frequency ranges, thus permitting the isolation of the frequency components introduced by boundaries into certain subbands, mainly the high-frequency subbands. The 2D wavelet decomposition of an image $I \in \mathbb{R}^{2M \times 2N}$ is performed by applying 1D low-pass ($\phi$) and high-pass ($\psi$) filters. This operation results in four decomposed subband images at each level, referred to as the low-low ($W^{ll}$), low-high ($W^{lh}$), high-low ($W^{hl}$), and high-high ($W^{hh}$) wavelet coefficients. For instance, the single-level WT is defined as follows:
$W^{ll}_1(i,j) = \sum_k \sum_l I(2i+k,\,2j+l)\,\phi(k)\,\phi(l)$
$W^{lh}_1(i,j) = \sum_k \sum_l I(2i+k,\,2j+l)\,\phi(k)\,\psi(l)$
$W^{hl}_1(i,j) = \sum_k \sum_l I(2i+k,\,2j+l)\,\psi(k)\,\phi(l)$
$W^{hh}_1(i,j) = \sum_k \sum_l I(2i+k,\,2j+l)\,\psi(k)\,\psi(l)$ (5)
All the convolutions above are performed with stride 2, yielding a down-sampling factor of 2 along each spatial dimension. The WT results in four bands $\{W^{ll}, W^{lh}, W^{hl}, W^{hh}\} \in \mathbb{R}^{M \times N}$ at the first level. A multi-scale wavelet decomposition successively applies Eq 5 to the low-low frequency coefficients $\{\cdot\}^{ll}$ from fine to coarse resolution. In this sense of resolution, the decomposition in multi-resolution wavelet analysis is analogous to the down-sampling steps in ResNet blocks [He et al. (2016)]. Moreover, it is worth noting that, while the low-frequency coefficients $\{\cdot\}^{ll}$ store local averages of the input data, the high-frequency counterparts, namely $\{\cdot\}^{lh}$, $\{\cdot\}^{hl}$ and $\{\cdot\}^{hh}$, encode local textures which are vital in recovering sharp boundaries. This motivates us to make use of the high-frequency wavelet coefficients to improve the quality of pixel-level boundary extraction. Throughout this paper, we use the Haar wavelet for its simplicity and its effectiveness in boosting the performance of the underlying ConvNet. In this scenario, the Haar filters used for the decomposition in Eq 5 are given by $\phi = (0.5, 0.5)$ and $\psi = (0.5, -0.5)$.
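To make Eq. 5 concrete, a minimal NumPy sketch of one Haar level with $\phi = (0.5, 0.5)$ and $\psi = (0.5, -0.5)$ follows; in the actual experiments the pytorch-wavelets package computes the decomposition, so this sketch is purely illustrative.

```python
import numpy as np

def haar_level(img):
    """One level of the 2D Haar WT in Eq. 5 for an even-sized grayscale
    image: stride-2 filtering with phi = (0.5, 0.5), psi = (0.5, -0.5)."""
    a, b = img[0::2, :], img[1::2, :]           # row pairs (index k)
    lo_r, hi_r = 0.5 * (a + b), 0.5 * (a - b)   # phi, psi along rows
    def cols(x):                                 # column pairs (index l)
        c, d = x[:, 0::2], x[:, 1::2]
        return 0.5 * (c + d), 0.5 * (c - d)
    ll, lh = cols(lo_r)                          # phi-phi, phi-psi
    hl, hh = cols(hi_r)                          # psi-phi, psi-psi
    return ll, lh, hl, hh                        # each of size (M, N)

def haar_pyramid(img, levels=4):
    """Multi-scale decomposition: recurse on the low-low band. The
    high-frequency bands (lh, hl, hh) at each level are what get
    concatenated to the conv features."""
    bands = []
    ll = img.astype(np.float32)
    for _ in range(levels):
        ll, lh, hl, hh = haar_level(ll)
        bands.append((lh, hl, hh))
    return bands
```

Calling haar_pyramid(img, levels=4) yields high-frequency bands whose spatial resolutions match the four conv-feature scales described next.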
The proposed ConvNet: Our architecture is shown in Figure 3. We now explain the components of the network. ResNet-101 Backbone: Several recent ConvNet architectures, including CASENet, use ResNet-101 as a basic building block. We use a similar architecture with simple concatenation as our backbone: a ResNet-101 with five res blocks and four down-sampling steps (max-pool or stride 2). This results in four tensors of conv features at four scales: $\{F_i\}_{i=1,\dots,4}$. Wavelets augmentation: A multi-scale wavelet decomposition of the input image with four levels produces a pyramid of frequency bands. The spatial resolutions of these bands are analogous to those of the conv features at each level. We select the high-frequency bands at each level of the wavelet pyramid and concatenate them along the channel dimension of the conv features at the corresponding level of the ResNet: $\{F_i \oplus W^{lh}_i \oplus W^{hl}_i \oplus W^{hh}_i\}_{i=1,\dots,4}$. Fuse blocks: These Fuse modules are designed to learn a proper scheme for fusing the wavelets with the conv features. They are based on 1 × 1 conv blocks that take in $\{F_i \oplus W^{lh}_i \oplus W^{hl}_i \oplus W^{hh}_i\}$ and produce features that match the next block's input dimensions.
Skip modules: Similar to CASENet, we design a set of skip connections that pass lower-layer features to the classification layer. These skip modules use up-sampling followed by 1 × 1 conv layers (with 64 filters). After up-sampling, the features from different resolutions are brought back to the original input resolution. All these features are concatenated along the channel axis.
cls module: This is a pixel-level classification module implemented with a conv layer with k filters, followed by a sigmoid layer, producing class-specific object boundary maps from the concatenated features from different levels of the ResNet. For the experiments in Section 4, we use k = 1, as we work with the problem of single-class object contour extraction." }, { "heading": "4 EXPERIMENTS", "text": "This section provides an extensive evaluation of our boundary extraction model and training scheme with a simulated dataset as well as a challenging real-world dataset. Our simulated dataset is prepared for a thorough ablation study without tolerating long training periods. Our real-world dataset is collected to train the proposed model for road contour extraction from aerial images, which is an essential component of the system we build in Section 5." }, { "heading": "4.1 ABLATION STUDIES WITH MNIST DIGIT-CONTOURS", "text": "" }, { "heading": "4.1.1 EXPERIMENTAL SETUP", "text": "Dataset preparation: The MNIST dataset [Deng (2012)] is a popular toy dataset in machine learning, originally designed for image classification algorithms. Lately, it has been adapted to several tasks such as object detection [multiMNIST, Eslami et al. (2016)], color image generation [colorMNIST, Metz et al. (2016)], and spatio-temporal representation learning [movingMNIST, Srivastava et al. (2015)]. These adapted datasets have been used to understand the behavior of models and training processes without tolerating long training periods. Similarly, we simulate the contourMNIST dataset using MNIST digits as objects of interest. The MNIST database contains 70000 gray-scale images of handwritten digits of resolution 28×28. The test images (10,000 digits) were kept separate from the training images (60,000 digits). For our study, we simulate 128×128-sized images with random digits placed on a random background sampled from the Pascal-VOC dataset [Everingham et al. (2015)] (as shown in Figure 4). Corresponding pixel-level groundtruth labels for digit contours are also generated for each simulated image. To mimic human labeling noise in the groundtruth of real-world training data, we transform these GT labels (see Figure 4d) with randomly generated TPS (thin-plate-spline) transformations; a sketch of this simulation process follows.
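A minimal sketch of how one such training pair could be composited is given below, assuming a grayscale background crop; the digit placement, the morphological-gradient contour, and all parameter values are our illustrative choices rather than the paper's exact recipe (which additionally perturbs the labels with TPS warps).

```python
import numpy as np
import cv2

def make_contour_mnist(digit28, background, canvas=128, thresh=64):
    """Composite one MNIST digit onto a (grayscale) Pascal-VOC crop and
    build its contour label. Placement and thresholds are illustrative."""
    img = cv2.resize(background, (canvas, canvas)).astype(np.uint8)
    scale = np.random.randint(28, 64)            # random digit size
    digit = cv2.resize(digit28, (scale, scale))
    x = np.random.randint(0, canvas - scale)     # random placement
    y = np.random.randint(0, canvas - scale)
    mask = (digit > thresh).astype(np.uint8)
    roi = img[y:y + scale, x:x + scale]
    roi[mask > 0] = digit[mask > 0]              # paste digit pixels
    # contour = dilation minus erosion of the digit mask (thin boundary)
    kernel = np.ones((3, 3), np.uint8)
    edge = cv2.morphologyEx(mask, cv2.MORPH_GRADIENT, kernel)
    label = np.zeros((canvas, canvas), np.uint8)
    label[y:y + scale, x:x + scale] = edge
    return img, label
```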
Following this process, we prepare a train set of 5000 images and a validation set of 500 images. Training on this dataset for ten epochs takes only around 17 minutes for all the experiments in this section.
Implementation details: We implemented all our methods using the PyTorch machine learning framework. We used the SGD optimizer with learning rate 1e−4 for all experiments. Training experiments were done on an NVIDIA GTX-1080 GPU with 12GB RAM. For the wavelet decomposition, we used the pytorch-wavelets1 package. All the models in this section are trained for ten epochs. The switch epoch is set to the 5th epoch for the CL-based training experiments. We refer to our high-frequency augmented ConvNet as CntrNet for brevity in this section.
Evaluation criteria: We used average precision (AP) as the evaluation metric. Prediction maps are thresholded with a fixed threshold of 0.5 to compute false and true positives. The usual groundtruth labels (Y) are used for validation. For simplicity, we omitted expensive post-processing steps (such as thinning) and evaluations at the optimal-data-scale [ODS, Acuna et al. (2019)] during training epochs. Each training experiment is run five times for statistical significance of the results and observations. Validation accuracy, in terms of AP, is computed for every epoch and plotted in Figure 4. Shaded regions in the plots represent standard deviations in the validation accuracies of different models and training schemes. For the final trained models, we also report AP measures at ODS, i.e. maxAP, which finds the maximum AP obtained over different thresholds. These values are shown in brackets, following the label names in the legends of the plots.
1www.pytorch-wavelets.readthedocs.io" }, { "heading": "4.1.2 ABLATION STUDIES", "text": "How do our proposals (CntrNet+CL) perform compared to the state of the art? We first compare the performance of our proposals with a state-of-the-art method, CASENet [Yu et al. (2017)]. Figure 4e illustrates the validation accuracy plots of CASENet and our CntrNet+CL over training epochs. CntrNet+CL seems to converge faster than CASENet. According to the maxAP scores mentioned in the legend of the plot, our network (CntrNet+CL) outperforms CASENet by approximately 6%. This means that CntrNet+CL produces fewer false alarms than CASENet. This is also quite evident in the qualitative results provided in Figure 4b-c." }, { "heading": "Is the proposed CL training necessary?", "text": "Here we evaluate how curriculum learning impacts the learning process in the absence of high-frequency (HF) augmentation. Towards this, we experiment with CntrNet without HF augmentation, CntrNet−. Validation accuracy plots are shown in Figure 4f when CntrNet− is trained with and without the CL-based scheme. The model achieves better generalization performance when trained with CL: it improves validation accuracy by 7.4% compared to the one trained without CL." }, { "heading": "Is the proposed high-frequency augmentation necessary?", "text": "We also evaluate how high-frequency augmentation impacts the model's learning process and performance in the absence of curriculum learning. As seen in Figure 4g, high-frequency augmentation helps the model achieve better performance promptly, starting from Stage 1. However, it improves the final performance only by a small extent (0.6%).
How does high-frequency augmentation impact the first stage of CL?
As seen in Figure 4i, the explicit augmentation of high frequencies helps Stage 1 converge faster. This is expected, because Stage 1 is about identifying the pixels with high frequencies and recognizing their class." }, { "heading": "4.2 RESULTS ON AERIAL ROAD CONTOURS DATASET", "text": "In this section, we evaluate the models on a real-world aerial image dataset for the task of extracting road contours. Dataset preparation: Our dataset was originally prepared for the geo-parcel alignment task (Section 5) and contains 13,368 aerial image tiles captured over the city of Redding, California. Each of these tiles has 4864×7168 resolution and covers an area of approximately 0.15×0.2 square miles. Samples of these tiles are shown in Figures 1 and 7. For training and evaluation experiments, we manually labeled 11 of these aerial image tiles with road contours. This labeling process took approximately 2 hours per tile. Ten tiles are used to prepare the training set, and one tile is left for validation. To prepare our train and val sets, we randomly crop several 2000×2000 subregions from these tiles and resize them to 256×256. We also use data augmentation techniques to amplify the scale of the set. The train and val sets contain 21649 and 522 samples, respectively.
Implementation: The network's implementation details are similar to those above. Here we set the batch size to eight and the input size to 256×256. The models are trained for 20 epochs, which takes approximately 2 days and 18 hours on an AWS GPU instance with an NVIDIA Tesla K80.
Evaluation: We report maxAP scores for both CASENet and our model (CntrNet+CL) in Figure 5e. Our method performs with an accuracy of 89.6%, while CASENet is at 75.3% maxAP points. In other words, our method outperforms CASENet by nearly 15%. As shown in Figure 5, our results are sharper and more accurate compared to those of CASENet. A result at tile scale can be seen in Figure 6. We use this model in a real-world application of geo-parcel alignment in the next section." }, { "heading": "5 APPLICATION TO GEO-PARCEL ALIGNMENT", "text": "In this section, we discuss an application of the proposed model, CntrNet+CL (trained for road contour extraction), to aligning geo-parcel data with aerial image tiles. Geo-parcel data is generally used to identify public and private land property boundaries for tax assessment processes. Parcels are shapefiles in the records maintained by local counties and represent the latitude-longitude GPS coordinates of property boundaries. We project these parcel shapes (using perspective projection) onto the geo-tagged coordinate system of the camera with which the aerial imagery was captured. This process results in binary contour images as shown in Figure 7c. These contours are ideally expected to match the visual property entities in the aerial image of the corresponding region (shown in Figure 7a). However, due to several differences in their collection processes, these two modalities of data often misalign by a large extent, sometimes on the order of 10 meters. Figure 7d depicts the misalignment of the original (before alignment) parcel contours overlaid on the aerial image in blue. This misalignment may lead to wrong property assignments to individuals, and thus incorrect tax assessments. These modalities of geographical data need to be aligned well before they are used to assist the processes of property assignment and tax assessment.
We use our CntrNet+CL model trained on the aerial road contours dataset prepared in Section 4.2.
Given an aerial image tile, as shown in Figure 7a, we first divide the tile into 12 non-overlapping patches and pass them as a batch to the model. The predictions from the model are stitched back to form a road contour prediction at tile resolution, as shown in Figure 7b. One can pose this alignment problem as an image registration task by considering the geo-parcel and road contour images as the moving and target images, respectively. We designed an image registration network for this purpose; a discussion of the registration network is out of the scope of this paper. A few samples of the final aligned parcels are overlaid in red in Figure 7d." }, { "heading": "6 CONCLUSIONS", "text": "In this work, we presented a novel ConvNet with explicit high-frequency augmentation and a new two-stage curriculum learning scheme for class-specific object boundary extraction. Our ablation studies with the simulated MNIST digit-contours dataset demonstrated that this explicit high-frequency augmentation helps the model converge faster. Our high-frequency augmented model, when trained with the proposed CL-based scheme, outperformed CASENet by nearly 15% on the aerial image dataset. We also demonstrated a use case of the developed contour extraction model: aligning geo-parcel boundaries with roads extracted from aerial imagery." } ]
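As a sketch of the tile-splitting and stitching step described in Section 5, the following assumes a 3 x 4 grid of the 12 non-overlapping patches and a model(batch) -> (N, h, w) interface; the paper additionally resizes patches to the network input size, which is omitted here for brevity.

```python
import numpy as np

def predict_tile(model, tile, rows=3, cols=4):
    """Split an aerial tile into rows*cols = 12 non-overlapping patches,
    run them as one batch, and stitch the predictions back to tile size.
    The 3x4 grid and the model interface are our assumptions."""
    h, w = tile.shape[0] // rows, tile.shape[1] // cols
    patches = [tile[r * h:(r + 1) * h, c * w:(c + 1) * w]
               for r in range(rows) for c in range(cols)]
    preds = model(np.stack(patches))          # (12, h, w) boundary maps
    out = np.zeros((rows * h, cols * w), np.float32)
    for i, p in enumerate(preds):
        r, c = divmod(i, cols)
        out[r * h:(r + 1) * h, c * w:(c + 1) * w] = p
    return out
```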
2,019
null
SP:91fbd1f4774de6619bd92d37e1a1b1e7f2ed96f3
[ "The paper proposes an extension to the Viper[1] method for interpreting and verifying deep RL policies by learning a mixture of decision trees to mimic the originally learned policy. The proposed approach can imitate the deep policy better compared with Viper while preserving verifiability. Empirically the proposed method demonstrates improvement in terms of cumulative reward and misprediction rate over Viper in four benchmark tasks.", "The paper proposes a method (MOET) to distillate a reinforcement learning policy represented by a deep neural network into an ensemble of decision trees. The main objective of this procedure is to obtain an \"interpretable\" and verifiable policy while maintaining the performance of the policy. The authors build over the previously published algorithm Viper (Bastani et al, 2018), which distillates deep policies into a single decision tree using the DAGGER procedures, i.e. alternation of imitation learning of an expert policy and of additional data-sampling from the newly learned policy. In the VIPER algorithm decision trees are chosen because their structured nature allows to formally prove properties of the policy they represent when the environments dynamics are known and expressible in closed form. " ]
Deep Reinforcement Learning (DRL) has led to many recent breakthroughs on complex control tasks, such as defeating the best human player in the game of Go. However, decisions made by the DRL agent are not explainable, hindering its applicability in safety-critical settings. Viper, a recently proposed technique, constructs a decision tree policy by mimicking the DRL agent. Decision trees are interpretable, as each action made can be traced back to the decision rule path that led to it. However, one global decision tree approximating the DRL policy has significant limitations with respect to the geometry of decision boundaries. We propose MOËT, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions. We propose a training procedure to support non-differentiable decision tree experts and integrate it into the imitation learning procedure of Viper. We evaluate our algorithm on four OpenAI Gym environments, and show that the policy constructed in this way is more performant and better mimics the DRL agent by lowering mispredictions and increasing the reward. We also show that MOËT policies are amenable to verification using off-the-shelf automated theorem provers such as Z3.
[]
[ { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of Go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "Abhinav Verma", "Vijayaraghavan Murali", "Rishabh Singh", "Pushmeet Kohli", "Swarat Chaudhuri" ], "title": "Programmatically Interpretable Reinforcement Learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Osbert Bastani", "Yewen Pu", "Armando Solar-Lezama" ], "title": "Verifiable reinforcement learning via policy extraction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Robert A Jacobs", "Michael I Jordan", "Steven J Nowlan", "Geoffrey E Hinton" ], "title": "Adaptive mixtures of local experts", "venue": "Neural computation,", "year": 1991 }, { "authors": [ "Michael I Jordan", "Lei Xu" ], "title": "Convergence results for the EM approach to mixtures of experts architectures", "venue": "Neural networks,", "year": 1995 }, { "authors": [ "Seniha Esen Yuksel", "Joseph N Wilson", "Paul D Gader" ], "title": "Twenty years of mixture of experts", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2012 }, { "authors": [ "Ozan Irsoy", "Olcay Taner Yıldız", "Ethem Alpaydın" ], "title": "Soft decision trees", "venue": "In Proceedings of the 21st International Conference on Pattern Recognition", "year": 2012 }, { "authors": [ "Wenbo Zhao", "Yang Gao", "Shahan Ali Memon", "Bhiksha Raj", "Rita Singh" ], "title": "Hierarchical Routing Mixture of Experts", "venue": "arXiv preprint arXiv:1903.07756,", "year": 2019 }, { "authors": [ "Leonardo De Moura", "Nikolaj Bjørner. 
Z" ], "title": "An efficient SMT solver", "venue": "In International conference on Tools and Algorithms for the Construction and Analysis of Systems,", "year": 2008 }, { "authors": [ "Zachary C Lipton" ], "title": "The mythos of model interpretability", "venue": "arXiv preprint arXiv:1606.03490,", "year": 2016 }, { "authors": [ "Riccardo Guidotti", "Anna Monreale", "Salvatore Ruggieri", "Franco Turini", "Fosca Giannotti", "Dino Pedreschi" ], "title": "A survey of methods for explaining black box models", "venue": "ACM computing surveys (CSUR),", "year": 2018 }, { "authors": [ "Finale Doshi-Velez", "Been Kim" ], "title": "Towards a rigorous science of interpretable machine learning", "venue": "arXiv preprint arXiv:1702.08608,", "year": 2017 }, { "authors": [ "Dumitru Erhan", "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Visualizing higher-layer features of a deep", "venue": null, "year": 2009 }, { "authors": [ "Been Kim", "Martin Wattenberg", "Justin Gilmer", "Carrie Cai", "James Wexler", "Fernanda Viegas", "Rory Sayres" ], "title": "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)", "venue": "arXiv preprint arXiv:1711.11279,", "year": 2017 }, { "authors": [ "Xin Zhang", "Armando Solar-Lezama", "Rishabh Singh" ], "title": "Interpreting neural network judgments via minimal, stable, and symbolic corrections", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Amit Dhurandhar", "Pin-Yu Chen", "Ronny Luss", "Chun-Chen Tu", "Paishun Ting", "Karthikeyan Shanmugam", "Payel Das" ], "title": "Explanations based on the missing: Towards contrastive explanations with pertinent negatives", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Riccardo Guidotti", "Anna Monreale", "Salvatore Ruggieri", "Dino Pedreschi", "Franco Turini", "Fosca Giannotti" ], "title": "Local rule-based explanations of black box decision systems", "venue": "arXiv preprint arXiv:1805.10820,", "year": 2018 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Leo Breiman", "Nong Shang" ], "title": "Born again trees", "venue": "Technical Report,", "year": 1996 }, { "authors": [ "Andrei A. Rusu", "Sergio Gomez Colmenarejo", "Çaglar Gülçehre", "Guillaume Desjardins", "James Kirkpatrick", "Razvan Pascanu", "Volodymyr Mnih", "Koray Kavukcuoglu", "Raia Hadsell" ], "title": "Policy distillation", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Anurag Koul", "Alan Fern", "Sam Greydanus" ], "title": "Learning finite state representations of recurrent policy networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Amy Zhang", "Zachary C. 
Lipton", "Luis Pineda", "Kamyar Azizzadenesheli", "Anima Anandkumar", "Laurent Itti", "Joelle Pineau", "Tommaso Furlanello" ], "title": "Learning causal state representations of partially observable environments", "venue": null, "year": 2019 }, { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In ICML,", "year": 2004 }, { "authors": [ "Stefan Schaal" ], "title": "Is imitation learning the route to humanoid robots", "venue": "Trends in cognitive sciences,", "year": 1999 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Van Hasselt", "Marc Lanctot", "Nando De Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "arXiv preprint arXiv:1511.06581,", "year": 2015 }, { "authors": [ "Andrew G Barto", "Richard S Sutton", "Charles W Anderson" ], "title": "Neuronlike adaptive elements that can solve difficult learning control problems", "venue": "IEEE transactions on systems, man, and cybernetics,", "year": 1983 }, { "authors": [ "Richard S Sutton" ], "title": "Generalization in reinforcement learning: Successful examples using sparse coarse coding", "venue": "In Advances in neural information processing systems,", "year": 1996 }, { "authors": [ "Andrew William Moore" ], "title": "Efficient memory-based learning for robot control", "venue": null, "year": 1990 }, { "authors": [ "OpenAI Gym: CartPole", "Pong", "Acrobot", "Mountaincar. C" ], "title": "CARTPOLE This environment consists of a cart and a rigid pole hinged to the cart, based on the system presented by Barto et al", "venue": "(Barto et al.,", "year": 1983 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep Reinforcement Learning (DRL) has achieved many recent breakthroughs in challenging domains such as Go (Silver et al., 2016). While using neural networks for encoding state representations allow DRL agents to learn policies for tasks with large state spaces, the learned policies are not interpretable, which hinders their use in safety-critical applications.\nSome recent works leverage programs and decision trees as representations for interpreting the learned agent policies. PIRL(Verma et al., 2018) uses program synthesis to generate a program in a Domain-Specific Language (DSL) that is close to the DRL agent policy. The design of the DSL with desired operators is a tedious manual effort and the enumerative search for synthesis is difficult to scale for larger programs. In contrast, Viper (Bastani et al., 2018) learns a Decision Tree (DT) policy by mimicking the DRL agent, which not only allows for a general representation for different policies, but also allows for verification of these policies using integer linear programming solvers.\nViper uses the DAGGER (Ross et al., 2011) imitation learning approach to collect state action pairs for training the student DT policy given the teacher DRL policy. It modifies the DAGGER algorithm to use the Q-function of teacher policy to prioritize states of critical importance during learning. However, learning a single DT for the complete policy leads to some key shortcomings such as i) less faithful representation of original agent policy measured by the number of mispredictions, ii) lower overall performance (reward), and iii) larger DT sizes that make them harder to interpret.\nIn this paper, we present MOËT (Mixture of Expert Trees), a technique based on Mixture of Experts (MOE) (Jacobs et al., 1991; Jordan and Xu, 1995; Yuksel et al., 2012), and reformulate its learning procedure to support DT experts. MOE models can typically use any expert as long as it is a differentiable function of model parameters, which unfortunately does not hold for DTs. Similar to MOE training with Expectation-Maximization (EM) algorithm, we first observe that MOËT can be trained by interchangeably optimizing the weighted log likelihood for experts (independently from one another) and optimizing the gating function with respect to the obtained experts. Then, we propose a procedure for DT learning in the specific context of MOE. To the best of our knowledge we are first to combine standard non-differentiable DT experts, which are interpretable, with MOE\nmodel. Existing combinations which rely on differentiable tree or treelike models, such as soft decision trees (Irsoy et al., 2012) and hierarchical mixture of experts (Zhao et al., 2019) are not interpretable.\nWe adapt the imitation learning technique of Viper to use MOËT policies instead of DTs. MOËT creates multiple local DTs that specialize on different regions of the input space, allowing for simpler (shallower) DTs that more accurately mimic the DRL agent policy within their regions, and combines the local trees into a global policy using a gating function. We use a simple and interpretable linear model with softmax function as the gating function, which returns a distribution over DT experts for each point in the input space. While standard MOE uses this distribution to average predictions of DTs, we also consider selecting just one most likely expert tree to improve interpretability. 
While the decision boundaries of Viper DT policies must be axis-perpendicular, the softmax gating function supports boundaries given by hyperplanes of arbitrary orientations, allowing MOËT to more faithfully represent the original policy.
We evaluate our technique on four different environments: CartPole, Pong, Acrobot, and Mountaincar. We show that MOËT achieves significantly better rewards and lower misprediction rates with shallower trees. We also visualize the Viper and MOËT policies for Mountaincar, demonstrating the differences in their learning capabilities. Finally, we demonstrate how a MOËT policy can be translated into an SMT formula for verifying properties of the CartPole game using the Z3 theorem prover (De Moura and Bjørner, 2008), under assumptions similar to those made in Viper.
In summary, this paper makes the following key contributions: 1) We propose MOËT, a technique based on MOE to learn mixtures of expert decision trees, and present a learning algorithm to train MOËT models. 2) We use MOËT models with a softmax gating function for interpreting DRL policies, and adapt the imitation learning approach used in Viper to learn MOËT models. 3) We evaluate MOËT on different environments and show that it leads to smaller, more faithful, and better performing representations of DRL agent policies compared to Viper, while preserving verifiability." }, { "heading": "2 RELATED WORK", "text": "Interpretable Machine Learning: In numerous contexts, it is important to understand and interpret the decision making process of a machine learning model. However, interpretability does not have a unique, widely accepted definition. According to Lipton (Lipton, 2016), there are several properties which might be meant by this word; we adopt the one which Lipton names transparency, which is further decomposed into simulability, decomposability, and algorithmic transparency. A model is simulable if a person can, in reasonable time, compute the outputs from given inputs and in that way simulate the model's inner workings. That holds for small linear models and small decision trees (Lipton, 2016). A model is decomposable if each part of the model admits an intuitive explanation, which is again the case for simple linear models and decision trees (Lipton, 2016). Algorithmic transparency is related to our understanding of the workings of the training algorithm. For instance, in the case of linear models, the shape of the error surface and the properties of its unique minimum, towards which the algorithm converges, are well understood (Lipton, 2016). MOËT models focus on transparency (as we discuss at the end of Section 5).
Explainable Machine Learning: There has been a lot of recent interest in explaining the decisions of black-box models (Guidotti et al., 2018a; Doshi-Velez and Kim, 2017). For image classification, activation maximization techniques can be used to sample representative input patterns (Erhan et al., 2009; Olah et al., 2017). TCAV (Kim et al., 2017) uses human-friendly high-level concepts to associate their importance to the decision. Some recent works also generate contrastive robust explanations to help users understand a classifier decision based on a family of neighboring inputs (Zhang et al., 2018; Dhurandhar et al., 2018). LORE (Guidotti et al., 2018b) explains the behavior of a black-box model around an input of interest by sampling the black-box model in the neighborhood of the input, and training a local DT over the sampled points.
Our model presents an approach that combines local trees into a global policy.
Tree-Structured Models: Irsoy et al. (Irsoy et al., 2012) propose a novel decision tree architecture with soft decisions at the internal nodes, where both children are chosen with probabilities given by a sigmoid gating function. Similarly, a binary tree-structured hierarchical routing mixture of experts (HRME) model, which has classifiers as non-leaf node experts and simple regression models as leaf node experts, was proposed in (Zhao et al., 2019). Both models are unfortunately not interpretable.
Knowledge Distillation and Model Compression: We rely on ideas already explored in the fields of model compression (Bucilu et al., 2006) and knowledge distillation (Hinton et al., 2015). The idea is to use a complex, well-performing model to facilitate the training of a simpler model which might have other desirable properties (e.g., interpretability). Such practices have been applied to approximate a decision tree ensemble by a single tree (Breiman and Shang, 1996), but this is different from our case, since we approximate a neural network. In a similar fashion, a neural network can be used to train another neural network (Furlanello et al., 2018), but neural networks are hard to interpret and even harder to formally verify, so this is also different from our case. Such practices have also been applied in the field of reinforcement learning, in knowledge and policy distillation (Rusu et al., 2016; Koul et al., 2019; Zhang et al., 2019), which are similar in spirit to our work, and in imitation learning (Bastani et al., 2018; Ross et al., 2011; Abbeel and Ng, 2004; Schaal, 1999), which provides a foundation for our work." }, { "heading": "3 MOTIVATING EXAMPLE: GRIDWORLD", "text": "We now present a simple motivating example to showcase some of the key differences between the Viper and MOËT approaches. Consider the N × N Gridworld problem shown in Figure 1a (for N = 5). The agent is placed at a random position in a grid (except the walls, denoted by filled rectangles) and should find its way out. To move through the grid, the agent can choose to go up, left, right or down at each time step. If it hits a wall, it stays in the same position (state). The state is represented using two integer values (x, y coordinates), which range from (0, 0) at the bottom left to (N − 1, N − 1) at the top right. The grid can be escaped through either the left doors (left of the first column) or the right doors (right of the last column). A negative reward of −0.1 is received for each agent action (the negative reward encourages the agent to find the exit as fast as possible). An episode finishes as soon as an exit is reached or after 100 steps, whichever comes first.
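For concreteness, a minimal sketch of such an environment is given below (our own illustrative code, not the paper's implementation; interior walls are omitted for brevity and the class and method names are ours):

import numpy as np

class GridWorld:
    # N x N grid; exits lie to the left of column 0 and to the right of column N-1.
    # Actions: 0=up, 1=left, 2=right, 3=down; each step yields reward -0.1.
    def __init__(self, n=5):
        self.n = n
        self.reset()

    def reset(self):
        self.pos = np.random.randint(0, self.n, size=2)  # (x, y), bottom-left origin
        self.steps = 0
        return self.pos.copy()

    def step(self, action):
        dx, dy = [(0, 1), (-1, 0), (1, 0), (0, -1)][action]
        x, y = self.pos[0] + dx, self.pos[1] + dy
        self.steps += 1
        escaped = x < 0 or x >= self.n                   # a left or right door was reached
        if not escaped and 0 <= y < self.n:              # hitting a wall keeps the agent in place
            self.pos = np.array([x, y])
        done = escaped or self.steps >= 100
        return self.pos.copy(), -0.1, done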
The optimal policy (π∗) for this problem consists of taking the left (resp. right) action for each state below (resp. above) the diagonal. We used π∗ as a teacher and used the imitation learning approach of Viper to train an interpretable DT policy that mimics π∗. The resulting DT policy is shown in Figure 1b. The DT partitions the state space (grid) using lines perpendicular to the x and y axes, until it separates all states above the diagonal from those below it. This results in a DT of depth 3 with 9 nodes. On the other hand, the policy learned by MOËT is shown in Figure 1c. The MOËT model with 2 experts learns to partition the space using the line defined by the linear function 1.06x + 1.11y = 4 (roughly the diagonal of the grid). Points on the two sides of the line correspond to two different experts, which are themselves DTs of depth 0, always choosing to go left (below the line) or right (above it).
We notice that the DT policy needs a much larger depth to represent π∗, while MOËT can represent it with a single decision step. Furthermore, with increasing N (the size of the grid), the complexity of the DT grows, while the complexity of MOËT stays the same; we empirically confirm this for N from 5 to 10. For N = 5, 6, 7, 8, 9, 10, the DT depths are 3, 4, 4, 4, 4, 5 and the numbers of nodes are 9, 11, 13, 15, 17, 21, respectively. In contrast, MOËT models of the same complexity and structure as the one shown in Figure 1c are learned for all values of N (the models differ only in the learned partitioning linear function)." }, { "heading": "4 BACKGROUND", "text": "In this section we describe the two relevant methods we build upon: (1) Viper, an approach for interpretable imitation learning, and (2) the MOE learning framework.
Viper. The Viper algorithm (included in the appendix) is an instance of the DAGGER imitation learning approach, adapted to prioritize critical states based on Q-values. The inputs to the Viper training algorithm are (1) an environment e, which is a finite horizon (T-step) Markov Decision Process (MDP) (S, A, P, R) with states S, actions A, transition probabilities P : S × A × S → [0, 1], and rewards R : S → R; (2) a teacher policy π_t : S → A; (3) its Q-function Q^{π_t} : S × A → R; and (4) the number of training iterations N. The distribution of states after T steps in environment e using a policy π is d^{(π)}(e) (assuming a randomly chosen initial state). Viper uses the teacher as an oracle to label the data (states with actions). It initially uses the teacher policy to sample trajectories (states) to train a student (DT) policy, and then uses the student policy to generate more trajectories. Viper samples training points from the collected dataset D, giving priority to states s with higher importance I(s), where $I(s) = \max_{a \in A} Q^{\pi_t}(s, a) - \min_{a \in A} Q^{\pi_t}(s, a)$. This sampling of states leads to faster learning and shallower DTs. The process of sampling trajectories and training students is repeated for N iterations, and the best student policy is chosen using the reward as the criterion.
Mixture of Experts. MOE is an ensemble model (Jacobs et al., 1991; Jordan and Xu, 1995; Yuksel et al., 2012) that consists of expert networks and a gating function. The gating function divides the input (feature) space into regions for which different experts are specialized and responsible. MOE is flexible with respect to the choice of expert models, as long as each expert is a differentiable function of the model parameters (which is not the case for DTs).
In the MOE framework, the probability of outputting y ∈ R^m given an input x ∈ R^n is
$$P(y|x, \theta) = \sum_{i=1}^{E} P(i|x, \theta_g) P(y|x, \theta_i) = \sum_{i=1}^{E} g_i(x, \theta_g) P(y|x, \theta_i) \quad (1)$$
where E is the number of experts, g_i(x, θ_g) is the probability of choosing expert i (given input x), and P(y|x, θ_i) is the probability of expert i producing output y (given input x). The learnable parameters are θ = (θ_g, θ_e), where θ_g are the parameters of the gating function and θ_e = (θ_1, θ_2, ..., θ_E) are the parameters of the experts. The gating function can be modeled using a softmax over a set of linear models: let θ_g consist of parameter vectors (θ_{g1}, ..., θ_{gE}); then $g_i(x, \theta_g) = \exp(\theta_{gi}^T x) / \sum_{j=1}^{E} \exp(\theta_{gj}^T x)$.
In the case of classification, expert i outputs a vector y_i of length C, where C is the number of classes, assigning a probability to each output class c (given by y_{ic}). The final probability of class c is the gate-weighted sum of y_{ic} over all experts i ∈ {1, 2, ..., E}. This creates a probability vector y = (y_1, y_2, ..., y_C), and the output of MOE is argmax_i y_i.
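For illustration, the computation of Eq. (1) can be sketched as follows (a minimal NumPy sketch, not the implementation used in the paper; expert_proba stands in for any per-expert class-probability model):

import numpy as np

def gating(x, theta_g):
    # g_i(x) = exp(theta_gi^T x) / sum_j exp(theta_gj^T x); theta_g has shape (E, n)
    logits = theta_g @ x
    logits -= logits.max()              # for numerical stability
    e = np.exp(logits)
    return e / e.sum()

def moe_predict(x, theta_g, expert_proba):
    # expert_proba: list of E callables, each mapping x to a length-C probability vector
    g = gating(x, theta_g)
    probs = sum(gi * ep(x) for gi, ep in zip(g, expert_proba))  # Eq. (1)
    return int(np.argmax(probs)), probs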
MOE is commonly trained using the EM algorithm, where instead of directly optimizing the likelihood one optimizes an auxiliary function L̂, defined in the following way. Let z denote the expert chosen for instance x. Then the joint likelihood of x and z can be considered. Since z is not observed in the data, the log likelihood of samples (x, z, y) cannot be computed; instead, the expected log likelihood is considered, where the expectation is taken over z. Since the expectation has to rely on some distribution of z, in the iterative process the distribution with respect to the current estimate of the parameters θ is used. More precisely, the function L̂ is defined by (Jordan and Xu, 1995):
$$\hat{L}(\theta, \theta^{(k)}) = E_z[\log P(x, z, y) \mid x, y, \theta^{(k)}] = \int P(z|x, y, \theta^{(k)}) \log P(x, z, y) \, dz \quad (2)$$
where θ^{(k)} is the estimate of the parameters θ in iteration k. Then, for a specific sample D = {(x_i, y_i) | i = 1, ..., N}, the following formula can be derived (Jordan and Xu, 1995):
$$\hat{L}(\theta, \theta^{(k)}) = \sum_{i=1}^{N} \sum_{j=1}^{E} h_{ij}^{(k)} \log g_j(x_i, \theta_g) + \sum_{i=1}^{N} \sum_{j=1}^{E} h_{ij}^{(k)} \log P(y_i|x_i, \theta_j) \quad (3)$$
where
$$h_{ij}^{(k)} = \frac{g_j(x_i, \theta_g^{(k)}) \, P(y_i|x_i, \theta_j^{(k)})}{\sum_{l=1}^{E} g_l(x_i, \theta_g^{(k)}) \, P(y_i|x_i, \theta_l^{(k)})} \quad (4)$$
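A sketch of the corresponding E-step (again our own illustrative code; gating and expert_proba are as in the previous sketch):

import numpy as np

def responsibilities(X, Y, theta_g, expert_proba):
    # Eq. (4): h_ij is proportional to g_j(x_i) * P(y_i | x_i, theta_j)
    N, E = X.shape[0], theta_g.shape[0]
    H = np.zeros((N, E))
    for i in range(N):
        g = gating(X[i], theta_g)
        lik = np.array([expert_proba[j](X[i])[Y[i]] for j in range(E)])
        H[i] = g * lik
    H /= H.sum(axis=1, keepdims=True)   # normalize over experts
    return H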
" }, { "heading": "5 MIXTURE OF EXPERT TREES", "text": "In this section we explain the adaptation of the original MOE model to a mixture of decision trees, and present both the training and the inference algorithms.
Considering that the coefficients h_{ij}^{(k)} (Eq. 4) are fixed with respect to θ, and that in Eq. 3 the gating part (the first double sum) and each expert part depend on disjoint subsets of the parameters θ, training can be carried out by interchangeably optimizing the weighted log likelihood for the experts (independently from one another) and optimizing the gating function with respect to the obtained experts. The training procedure for MOËT, described by Algorithm 1, is based on this observation. First, the parameters of the gating function are randomly initialized (line 2). Then the experts are trained one by one. Each expert j is trained on a dataset D_w of instances weighted by the coefficients h_{ij}^{(k)} (line 5), by applying a specific DT learning algorithm (line 6) that we adapted to the MOE context (described below). After the experts are trained, an optimization step is performed (line 7) in order to increase the gating part of Eq. 3. At the end, the parameters are returned (line 8).
Our tree learning procedure is as follows. Our technique modifies the original MOE algorithm in that it uses DTs as experts. The fundamental difference with respect to the traditional model comes from the fact that DTs do not rely on an explicit and differentiable loss function that could be optimized by gradient descent or Newton's method. Instead, due to their discrete structure, they rely on a specific greedy training procedure. Therefore, the training of DTs has to be modified to take into account the attribution of instances to experts given by the coefficients h_{ij}^{(k)}, sometimes called the responsibility of expert j for instance i. If these responsibilities were hard, meaning that each instance is assigned to strictly one expert, they would partition the feature space into disjoint regions belonging to different experts. Soft responsibilities, on the other hand, fractionally distribute each instance among the experts: the higher the responsibility of expert j for instance i, the higher the influence of that instance on that expert's training. To formulate this principle, we consider the ways in which an instance influences the construction of a tree. First, it affects the impurity measure computed when splitting the nodes, and second, it influences the probability estimates in the leaves of the tree. We address these two issues next.
A commonly used impurity measure for determining splits in the tree is the Gini index. Let U be the set of indices of the instances assigned to the node for which the split is being computed, and D_U the set of corresponding instances. Let the categorical outcomes of y be 1, ..., C, and for l = 1, ..., C let p_l denote the fraction of assigned instances for which y = l; more formally, $p_l = \frac{\sum_{i \in U} I[y_i = l]}{|U|}$, where I denotes the indicator function of its argument expression and equals 1 if the expression is true. Then the Gini index G of the set D_U is defined by $G(p_1, \ldots, p_C) = 1 - \sum_{l=1}^{C} p_l^2$. Considering that the assignments of instances to experts are fractional, as defined by the responsibility coefficients h_{ij}^{(k)} (which are provided to the tree fitting function as instance weights, computed in line 5 of the algorithm), this definition has to be modified: the instances assigned to the node should not be counted, but instead their weights should be summed. Hence, we propose the following definition:
$$\hat{p}_l = \frac{\sum_{i \in U} I[y_i = l] \, h_{ij}^{(k)}}{\sum_{i \in U} h_{ij}^{(k)}} \quad (5)$$
and compute the Gini index of the set D_U as G(p̂_1, ..., p̂_C). A similar modification can be performed for other impurity measures that rely on the distribution of outcomes of a categorical variable, like entropy. Note that while the assignments of instances to experts are soft, the assignments of instances to nodes within an expert are hard, meaning that the sets of instances assigned to different nodes are disjoint. The probability estimate for y in a leaf node is usually obtained by computing the fractions of instances belonging to each class. In our case, the modification is the same as the one presented by Eq. 5. In this way, the estimates of the probabilities P(y|x, θ_j^{(k)}) needed by MOE are defined. In Algorithm 1, the function fit_tree performs decision tree training using the above modifications.
We consider two ways to perform inference with the obtained model. The first, which we call MOËT, maximizes P(y|x, θ) with respect to y, where this probability is defined by Eq. 1. The second, which we call MOËTh, performs inference as $\arg\max_y P(y|x, \theta_{\arg\max_j g_j(x, \theta_g)})$, meaning that we rely only on the most probable expert.
Algorithm 1 MOËT training.
1: procedure MOËT(data {(x_i, y_i) | i = 1, ..., N}, epochs N_E, number of experts E)
2:   θ_g ← initialize()
3:   for e ← 1 to N_E do
4:     for j ← 1 to E do
5:       D_w ← {(x_i, y_i, w_{ij}) | i = 1, ..., N}, where $w_{ij} = \frac{g_j(x_i, \theta_g) P(y_i|x_i, \theta_j)}{\sum_{k=1}^{E} g_k(x_i, \theta_g) P(y_i|x_i, \theta_k)}$
6:       θ_j ← fit_tree(D_w)
7:     θ_g ← θ_g + λ ∇_{θ'} Σ_{i=1}^{N} Σ_{j=1}^{E} w_{ij} log g_j(x_i, θ')
8:   return θ_g, (θ_1, ..., θ_E)
Adaptation of MOËT to imitation learning. We integrate the MOËT model into the imitation learning approach of Viper by substituting the MOËT training procedure for the DT training.
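Because Eq. (5) is exactly what standard tree learners compute when each instance i carries a sample weight h_ij, one M-step for expert j (line 6, fit_tree) can be sketched with scikit-learn as a stand-in (an assumption of ours; the paper does not specify the tree implementation):

from sklearn.tree import DecisionTreeClassifier

def fit_expert(X, Y, H, j, max_depth):
    # Weighted Gini splits and weighted leaf class fractions reduce to Eq. (5)
    # when instance i is given weight H[i, j], its responsibility for expert j.
    tree = DecisionTreeClassifier(criterion="gini", max_depth=max_depth)
    tree.fit(X, Y, sample_weight=H[:, j])
    return tree

The gating update of line 7 is then an ordinary softmax-regression gradient step using the same weights.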
Expressiveness. Standard decision trees make their decisions by partitioning the feature space into regions whose borders are perpendicular to the coordinate axes. To approximate borders that are not perpendicular to the coordinate axes, very deep trees are usually necessary. MOËTh mitigates this shortcoming by exploiting the hard softmax partitioning of the feature space, whose borders are still hyperplanes but need not be perpendicular to the coordinate axes (see Section 3). This improves the expressiveness of the model.
Interpretability and Verifiability. A MOËTh model is a combination of a linear model and several decision tree models. For interpretability, which is preserved in Lipton's sense of transparency, it is important that a single DT is used for each prediction (instead of a weighted average). Simulability of MOËTh, which consists of DT and linear models, is preserved because our models are small (2 ≤ depth ≤ 10) and we do not use high-dimensional features (Lipton, 2016), so a person can easily simulate the model. Similarly, decomposability is preserved because simple linear models without heavily engineered features and decision trees are decomposable (Lipton, 2016), and MOËTh is a simple combination of the two. Finally, algorithmic transparency is largely achieved because MOËT training relies on DT training for the experts and linear model training for the gate, both of which are well understood; however, the alternating refinement of the initial feature space partitioning and the experts makes the procedure more complicated, so algorithmic transparency is only partially achieved. Importantly, we define a well-founded translation of MOËTh models to SMT formulas, which opens a new range of possibilities for interpreting and validating the model using automated reasoning tools. SMT formulas provide a rich means of logical reasoning, where a user can ask the solver questions such as: "On which inputs do the two models differ?", "What is the closest input to the given input on which the model makes a different prediction?", "Are the two models equivalent?", or "Are the two models equivalent with respect to the output class C?". Answers to these and similar questions can help one understand and compare models in a rigorous way. Also note that our symbolic reasoning over the gating function and the decision trees allows us to construct SMT formulas that are readily handled by off-the-shelf tools, whereas direct SMT encodings of neural networks do not scale to any reasonably sized network because of the need for non-linear arithmetic reasoning." }, { "heading": "6 EVALUATION", "text": "We now compare MOËT and Viper on four OpenAI Gym environments: CartPole, Pong, Acrobot and Mountaincar. For the DRL agents, we use a policy gradient model in CartPole and a DQN (Mnih et al., 2015) in the other environments (training parameters are provided in the appendix). The rewards obtained by the agents on CartPole, Pong, Acrobot and Mountaincar are 200.00, 21.00, −68.60 and −105.27, respectively (higher reward is better). Rewards are averaged across 100 runs (250 in CartPole).
Comparison of MOËT, MOËTh, and Viper policies. For the CartPole, Acrobot, and Mountaincar environments, we train Viper DTs with maximum depths of {1, 2, 3, 4, 5}, while in the case of Pong we use maximum depths of {4, 6, 8, 10}, as the problem is more complex and requires deeper trees. For the experts in MOËT policies we use the same maximum depths as in Viper (except for Pong, for which we use depths 1 to 9), and we train the policies for 2 to 8 experts (in the case of Pong we train with {2, 4, 8} experts).
We train all policies using 40 iterations of the Viper algorithm, and choose the best performing policy in terms of rewards (and lower misprediction rate in case of equal rewards). We use two criteria to compare policies: rewards and mispredictions (the number of times the student performs an action different from what the teacher would do). A high reward indicates that the student learned the more crucial parts of the teacher's policy, while a low misprediction rate indicates that in most cases the student performs the same action as the teacher. In order to measure mispredictions, we run the student for a number of runs and compare the actions it took to the actions the teacher would perform.
To ensure comparable depths for evaluating Viper and MOËT models while accounting for the different number of experts in MOËT, we introduce the notion of the effective depth of a MOËT model as $\lceil \log_2(E) \rceil + D$, where E denotes the number of experts and D denotes the depth of each expert. Table 1 compares the performance of Viper, MOËT and MOËTh. The first column shows the depth of the Viper decision trees and the corresponding effective depth for MOËT; rewards and mispredictions are shown in the R and M columns, respectively. We show results of the best performing MOËT configuration for a given effective depth, chosen based on average results for rewards and mispredictions, where e.g. E3:D2 denotes 3 experts with DTs of depth 2. All results shown are averaged across 10 runs (7 runs for Pong because of its high computational cost).
For CartPole, Viper, MOËT and MOËTh all achieve the perfect reward (200) with depths of 2 and greater. More interestingly, for depth 2, MOËT and MOËTh obtain significantly lower average misprediction rates of 0.84% and 0.91%, respectively, compared to 16.65% for Viper. Even for larger depths, the misprediction rates for MOËT and MOËTh remain significantly lower. For Pong, we observe that MOËT and MOËTh consistently outperform Viper for all depths in terms of rewards and mispredictions, whereas MOËT and MOËTh have similar performance. For Acrobot, we similarly notice that both MOËT and MOËTh achieve consistently better rewards compared to Viper for all depths. Moreover, the misprediction rates are also significantly lower for MOËT and MOËTh in the majority of cases. Finally, for Mountaincar as well, we observe that MOËT and MOËTh both consistently outperform Viper, with significantly higher rewards and lower misprediction rates. Moreover, in both of these environments, we observe that MOËT and MOËTh achieve comparable reward and misprediction rates. Additional results are presented in the appendix.
Analyzing the learned policies. We analyze the learned student policies (Viper and MOËTh) by visualizing their state-action space, the differences between them, and their differences with the teacher policy. We use the Mountaincar environment for this analysis because of the ease of visualizing its 2-dimensional state space, comprising the car position (p) and car velocity (v) features, and its 3 allowed actions: left, right, and neutral. We visualize the DRL, Viper and MOËTh policies in Figure 2, showing the actions taken in different parts of the state space (additional visualizations are in the appendix).
The state space is defined by the feature bounds p ∈ [−1.2, 0.6] and v ∈ [−0.07, 0.07], which represent the sets of allowed feature values in Mountaincar. We sample the space uniformly with a resolution of 200 × 200. The actions left, neutral, and right are colored in green, yellow, and blue, respectively.
Recall that MOËTh can cover regions whose borders are hyperplanes of arbitrary orientation, while Viper, i.e., a DT, can only cover regions whose borders are perpendicular to the coordinate axes. This manifests in the MOËTh policy containing slanted borders in the yellow and green regions, capturing the geometry of the DRL policy more precisely, while the Viper policy only contains straight borders.
Furthermore, we visualize mispredictions for the Viper and MOËTh policies. While in the previous section we calculated mispredictions by using the student policy to play the game, in this analysis we visualize mispredictions across the whole state space by sampling. Note that in some states (critical states) it is more important to get the action right, while in other states choosing a non-optimal action does not affect the overall score much. The Viper authors make use of this observation to weight states by their importance, using the difference between the Q values of the optimal and non-optimal actions as a proxy for how important (critical) a state is. The importance score is calculated as follows: $I(s) = \max_{a \in A} Q(s, a) - \min_{a \in A} Q(s, a)$, where Q(s, a) denotes the Q value of action a in state s, and A is the set of all possible actions. Using the function I, we weight mispredictions by their importance.
We create a vector i consisting of the importance scores of the sampled points, and normalize it to the range [0, 1]. We also create a binary vector z which is 1 in the case of a misprediction (the student policy decision differs from the DRL decision) and 0 otherwise. We visualize m = z ⊙ i (element-wise multiplication), where a higher value indicates a misprediction of higher importance and is denoted by a red color of higher intensity. Such normalized mispredictions (m) for the Viper and MOËTh policies are shown in Figure 2d and Figure 2e, respectively. We can observe that the MOËTh policy has fewer high-intensity regions, leading to fewer overall mispredictions. To provide a quantitative difference between the mispredictions of the two policies, we compute $M = (\sum_j m_j / \sum_j i_j) \cdot 100$, a measure bounded in [0, 100] whose value is 0 in the case of no mispredictions and 100 in the case of all mispredictions. For the policies shown in Figure 2d and Figure 2e, we obtain M = 15.51 for the Viper and M = 11.78 for the MOËTh policy. We also show the differences in mispredictions between Viper and MOËTh (Figure 2f), obtained by subtracting the m vector of MOËTh from the m vector of Viper. Positive values are shown in blue and negative values in red. Higher intensity blue regions denote states where the MOËTh policy gets an important action right and Viper does not (and vice versa for high intensity red regions).
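A sketch of this importance-weighted misprediction measure (our own illustrative code; Q holds the teacher's Q values on the sampled grid):

import numpy as np

def weighted_misprediction_score(Q, teacher_actions, student_actions):
    # I(s) = max_a Q(s, a) - min_a Q(s, a) for every sampled state
    imp = Q.max(axis=1) - Q.min(axis=1)
    i = (imp - imp.min()) / (imp.max() - imp.min())      # normalize to [0, 1]
    z = (teacher_actions != student_actions).astype(float)
    m = z * i                                            # element-wise weighting
    M = 100.0 * m.sum() / i.sum()                        # 0: no mispredictions, 100: all
    return m, M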
Translating MOËT to SMT. We now show the translation of a MOËT policy into SMT constraints for verifying policy properties. We present an example translation of a MOËT policy for the CartPole environment, with the same property specification that was proposed for verifying Viper policies (Bastani et al., 2018). The goal in CartPole is to keep the pole upright, which can be encoded as the formula $\psi \equiv s_0 \in S_0 \wedge \bigwedge_{t=1}^{\infty} |\varphi(f_t(s_{t-1}, \pi(s_{t-1})))| \leq y_0$, where s_i represents the state after i steps and φ is the deviation of the pole from the upright position. In order to encode this formula, it is necessary to encode the transition function f_t(s, a), which models the environment dynamics: given a state and an action, it returns the next state of the environment. It is also necessary to encode the policy function π(s), which for a given state returns the action to perform. There are two issues with verifying ψ: (1) the infinite time horizon; and (2) the nonlinear transition function f_t. To address them, Bastani et al. (2018) use a finite time horizon T_max = 10 and a linear approximation of the dynamics, and we make the same assumptions.
To encode π(s) we need to translate both the gating function and the DT experts into logical formulas. Since the gating function in MOËTh uses the exponential function, it is difficult to encode it directly in Z3, as SMT solvers do not have efficient decision procedures for non-linear arithmetic; a direct encoding of exponentiation therefore leads to prohibitively complex Z3 formulas. We exploit the following simplification of the gating function, which is sound when hard prediction is used:
$$e = \arg\max_i \frac{\exp(\theta_{gi}^T x)}{\sum_{j=1}^{E} \exp(\theta_{gj}^T x)} = \arg\max_i \exp(\theta_{gi}^T x) = \arg\max_i \theta_{gi}^T x$$
The first simplification is possible because the denominators of the gating values of all experts are the same, and the second follows from the monotonicity of the exponential function. For encoding the DTs we use the same encoding as in Viper. To verify that ψ holds, we need to show that ¬ψ is unsatisfiable. We run the verification with our MOËTh policies and show that ¬ψ is indeed unsatisfiable. To better understand the scalability of our verification procedure, we report in Figure 3 the times needed to verify policies for different numbers of experts and different expert depths. We observe that while MOËTh policies with 2 experts take from 2.6s to 8s to verify, the verification times for 8 experts can go up to as much as 319s. This directly corresponds to the complexity of the logical formula, which grows with the number of experts.
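To make the gating encoding concrete, the hard-gating argmax can be expressed in z3py roughly as follows (our own sketch; the DT formulas, here tree_formulas, are assumed to be encoded as in Viper, and ties between gate scores are broken arbitrarily):

from z3 import Real, And, Implies, Sum

x = [Real(f"x{i}") for i in range(4)]    # the 4 CartPole state variables

def gate_score(theta_gj, x_vars):
    # Linear score theta_gj^T x; no exponentiation needs to be encoded.
    return Sum([float(w) * xv for w, xv in zip(theta_gj, x_vars)])

def expert_selected(theta_g, x_vars, j):
    # Expert j is chosen iff its gate score is maximal among all experts.
    sj = gate_score(theta_g[j], x_vars)
    return And([sj >= gate_score(theta_g[k], x_vars)
                for k in range(len(theta_g)) if k != j])

# The policy is then a conjunction of implications: if expert j is selected,
# its tree formula constrains the chosen action.
# pi = And([Implies(expert_selected(theta_g, x, j), tree_formulas[j])
#           for j in range(len(theta_g))])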
" }, { "heading": "7 CONCLUSION", "text": "We introduced MOËT, a technique based on MOE with expert decision trees, and presented a learning algorithm to train MOËT models. We then used MOËT models for interpreting DRL agent policies, where different local DTs specialize on different regions of the input space and are combined into a global policy using a gating function. We showed that MOËT models lead to smaller, more faithful and better performing representations of DRL agents compared to previous state-of-the-art approaches like Viper, while still maintaining interpretability and verifiability." }, { "heading": "B DRL AGENT TRAINING PARAMETERS", "text": "Here we present the parameters we used to train the DRL agents for the different environments. For CartPole, we use the policy gradient model used in Viper. While we use the same model, we had to retrain it from scratch, as the trained Viper agent was not available. For Pong, we use a deep Q-network (DQN) (Mnih et al., 2015); we use the same model as in Viper, which originates from OpenAI baselines (OpenAI Baselines). For Acrobot and Mountaincar, we implement our own version of a dueling DQN following (Wang et al., 2015). We use 3 hidden layers with 15 neurons in each layer. We set the learning rate to 0.001, the batch size to 30, the step size to 10000 and the number of epochs to 80000. We checkpoint a model every 5000 steps and pick the best performing one in terms of achieved reward." }, { "heading": "C ENVIRONMENTS", "text": "In this section we provide a brief description of the environments used in our experiments. We used four environments from OpenAI Gym: CartPole, Pong, Acrobot and Mountaincar.
C.1 CARTPOLE
This environment consists of a cart and a rigid pole hinged to the cart, based on the system presented in (Barto et al., 1983). At the beginning, the pole is upright, and the goal is to prevent it from falling over. The cart is allowed to move horizontally within predefined bounds, and the controller chooses to apply either a left or a right force to the cart. The state is defined by four variables: x (cart position), ẋ (cart velocity), θ (pole angle), and θ̇ (pole angular velocity). The game is terminated when the absolute value of the pole angle exceeds 12°, the cart position is more than 2.4 units away from the center, or after 200 successful steps, whichever comes first. In each step a reward of +1 is given, and the game is considered solved when the average reward is over 195 over 100 consecutive trials.
C.2 PONG
This is the classic Atari game of table tennis with two players. The minimum possible score is −21 and the maximum is 21.
C.3 ACROBOT
This environment is analogous to a gymnast swinging on a horizontal bar, and consists of two links and two joints, where the joint between the links is actuated. The environment is based on the system presented in (Sutton, 1996). Initially both links point downwards, and the goal is to swing the end-point (the feet) above the bar by at least the length of one link. The state consists of six variables: four variables for the sin and cos values of the joint angles, and two variables for the angular velocities of the joints. The action is applying negative, neutral, or positive torque on the joint. At each time step a reward of −1 is received, and the episode is terminated upon successfully reaching the height, or after 200 steps, whichever comes first. Acrobot is an unsolved environment in that there is no reward threshold under which it is considered solved; the goal is simply to achieve a high reward.
C.4 MOUNTAINCAR
This environment consists of a car positioned between two hills, with the goal of reaching the hill in front of the car. The environment is based on the system presented in (Moore, 1990). The car can move along a one-dimensional track, but does not have enough power to reach the hill in one go; it thus needs to build momentum by going back and forth to finally reach the hill. The controller can choose the left, right or neutral action to apply a left, right or no force to the car. The state is defined by two variables, describing the car position and the car velocity. In each step a reward of −1 is received, and the episode is terminated upon reaching the hill, or after 200 steps, whichever comes first. The game is considered solved if the average reward over 100 consecutive trials is no less than −110." }, { "heading": "D ADDITIONAL VISUALIZATIONS", "text": "In this section we provide a visualization of the gating function. Figure 4 shows how the gating function partitions the state space into regions for which different experts specialize. The gatings of a MOËTh policy with 4 experts and depth 1 are shown." }, { "heading": "E ADDITIONAL TABLES", "text": "Table 2 shows results similar to Table 1, but here, in addition to averaging results across multiple trained models, it averages results across the multiple MOËT configurations that have the same effective depth.
Table 3 shows the results of the best performing DRL, MOËT and MOËTh models on the evaluation subjects."
}, { "heading": "1 181.76 30.43%", "text": "" }, { "heading": "2 200.00 16.65%", "text": "" }, { "heading": "3 200.00 11.04%", "text": "" }, { "heading": "4 200.00 6.87%", "text": "" }, { "heading": "5 200.00 5.89%", "text": "" }, { "heading": "2 1 200.00 0.84%", "text": "" }, { "heading": "2 2 200.00 1.16%", "text": "" }, { "heading": "2 3 200.00 1.04%", "text": "" }, { "heading": "2 4 200.00 1.58%", "text": "" }, { "heading": "2 5 200.00 2.44%", "text": "" }, { "heading": "3 1 200.00 0.66%", "text": "" }, { "heading": "3 2 200.00 0.92%", "text": "" }, { "heading": "3 3 200.00 0.93%", "text": "" }, { "heading": "3 4 200.00 1.37%", "text": "" }, { "heading": "3 5 200.00 2.37%", "text": "" }, { "heading": "4 1 200.00 0.80%", "text": "" }, { "heading": "4 2 200.00 0.97%", "text": "" }, { "heading": "4 3 200.00 0.96%", "text": "" }, { "heading": "4 4 200.00 1.53%", "text": "" }, { "heading": "4 5 199.96 2.71%", "text": "" }, { "heading": "5 1 200.00 0.92%", "text": "" }, { "heading": "5 2 200.00 1.02%", "text": "" }, { "heading": "5 3 200.00 1.26%", "text": "" }, { "heading": "5 4 200.00 1.97%", "text": "" }, { "heading": "5 5 200.00 3.01%", "text": "" }, { "heading": "6 1 200.00 0.99%", "text": "" }, { "heading": "6 2 200.00 1.27%", "text": "" }, { "heading": "6 3 200.00 1.17%", "text": "" }, { "heading": "6 4 200.00 1.97%", "text": "" }, { "heading": "6 5 200.00 2.68%", "text": "" }, { "heading": "7 1 200.00 0.93%", "text": "" }, { "heading": "7 2 200.00 1.07%", "text": "" }, { "heading": "7 3 200.00 1.64%", "text": "" }, { "heading": "7 4 200.00 2.67%", "text": "" }, { "heading": "7 5 200.00 2.91%", "text": "" }, { "heading": "8 1 200.00 1.29%", "text": "" }, { "heading": "8 2 200.00 1.26%", "text": "" }, { "heading": "8 3 200.00 1.57%", "text": "" }, { "heading": "8 4 200.00 2.23%", "text": "" }, { "heading": "8 5 200.00 3.27%", "text": "" }, { "heading": "F ABLATION RESULTS", "text": "In this section we show results for all DT depths and numbers of experts used for training Viper and MOËT policies. Average mispredictions and rewards are shown for all configurations. Tables 4,5,6 show results for CartPole. Tables 7,8,9 show results for Pong. Tables 10,11,12 show results for Acrobot. Tables 13,14,15 show results for Mountaincar." 
}, { "heading": "8 5 200.00 2.84%", "text": "" }, { "heading": "8 4 200.00 2.57%", "text": "" }, { "heading": "8 3 200.00 1.70%", "text": "" }, { "heading": "8 2 200.00 1.26%", "text": "" }, { "heading": "8 1 200.00 1.12%", "text": "" }, { "heading": "7 5 200.00 3.30%", "text": "" }, { "heading": "7 4 200.00 2.33%", "text": "" }, { "heading": "7 3 200.00 1.36%", "text": "" }, { "heading": "7 2 200.00 1.59%", "text": "" }, { "heading": "7 1 200.00 0.90%", "text": "" }, { "heading": "6 5 200.00 2.95%", "text": "" }, { "heading": "6 4 200.00 2.10%", "text": "" }, { "heading": "6 3 200.00 1.40%", "text": "" }, { "heading": "6 2 200.00 1.02%", "text": "" }, { "heading": "6 1 200.00 0.87%", "text": "" }, { "heading": "5 5 200.00 3.17%", "text": "" }, { "heading": "5 4 200.00 1.95%", "text": "" }, { "heading": "5 3 200.00 1.31%", "text": "" }, { "heading": "5 2 200.00 0.98%", "text": "" }, { "heading": "5 1 200.00 0.92%", "text": "" }, { "heading": "4 5 200.00 2.58%", "text": "" }, { "heading": "4 4 200.00 1.86%", "text": "" }, { "heading": "4 3 200.00 1.15%", "text": "" }, { "heading": "4 2 200.00 0.80%", "text": "" }, { "heading": "4 1 200.00 0.61%", "text": "" }, { "heading": "3 5 200.00 2.48%", "text": "" }, { "heading": "3 4 200.00 1.42%", "text": "" }, { "heading": "3 3 200.00 0.87%", "text": "" }, { "heading": "3 2 200.00 0.95%", "text": "" }, { "heading": "3 1 200.00 0.93%", "text": "" }, { "heading": "2 5 200.00 2.31%", "text": "" }, { "heading": "2 4 200.00 1.34%", "text": "" }, { "heading": "2 3 200.00 1.14%", "text": "" }, { "heading": "2 2 200.00 0.98%", "text": "" }, { "heading": "2 1 200.00 0.91%", "text": "" }, { "heading": "4 -82.64 20.17%", "text": "" }, { "heading": "3 -83.40 19.68%", "text": "" }, { "heading": "2 -86.17 19.83%", "text": "" }, { "heading": "8 7 20.63 47.19%", "text": "" }, { "heading": "8 5 19.13 66.12%", "text": "" }, { "heading": "8 3 15.21 72.37%", "text": "" }, { "heading": "8 1 -0.23 73.31%", "text": "" }, { "heading": "4 8 20.73 54.91%", "text": "" }, { "heading": "4 6 18.04 63.66%", "text": "" }, { "heading": "4 4 15.13 71.10%", "text": "" }, { "heading": "4 2 4.88 75.01%", "text": "" }, { "heading": "2 9 19.51 56.33%", "text": "" }, { "heading": "2 7 16.80 68.04%", "text": "" }, { "heading": "2 5 7.64 74.29%", "text": "" }, { "heading": "2 3 4.60 77.23%", "text": "" }, { "heading": "8 7 20.62 53.52%", "text": "" }, { "heading": "8 5 17.27 64.35%", "text": "" }, { "heading": "8 3 14.84 73.77%", "text": "" }, { "heading": "8 1 6.05 76.48%", "text": "" }, { "heading": "4 8 20.65 56.72%", "text": "" }, { "heading": "4 6 18.01 65.56%", "text": "" }, { "heading": "4 4 14.70 70.27%", "text": "" }, { "heading": "4 2 2.95 72.88%", "text": "" }, { "heading": "2 9 20.70 56.78%", "text": "" }, { "heading": "2 7 20.24 64.32%", "text": "" }, { "heading": "2 5 15.20 70.44%", "text": "" }, { "heading": "2 3 0.09 74.09%", "text": "" }, { "heading": "8 4 -81.61 12.77%", "text": "" }, { "heading": "8 3 -83.27 15.39%", "text": "" }, { "heading": "8 2 -80.19 14.23%", "text": "" }, { "heading": "8 1 -79.92 14.89%", "text": "" }, { "heading": "7 5 -84.96 13.19%", "text": "" }, { "heading": "7 4 -81.84 13.89%", "text": "" }, { "heading": "7 3 -84.00 16.07%", "text": "" }, { "heading": "7 2 -78.58 14.74%", "text": "" }, { "heading": "7 1 -81.10 15.88%", "text": "" }, { "heading": "6 5 -84.45 13.75%", "text": "" }, { "heading": "6 4 -82.64 13.52%", "text": "" }, { "heading": "6 3 -84.07 16.23%", "text": "" }, { "heading": "6 2 -82.74 15.07%", "text": "" }, { "heading": "6 1 
-81.22 17.49%", "text": "" }, { "heading": "5 5 -84.53 14.67%", "text": "" }, { "heading": "5 4 -85.50 15.53%", "text": "" }, { "heading": "5 3 -83.12 16.84%", "text": "" }, { "heading": "5 2 -83.40 17.75%", "text": "" }, { "heading": "5 1 -83.70 17.87%", "text": "" }, { "heading": "4 5 -82.86 14.28%", "text": "" }, { "heading": "4 4 -84.36 16.34%", "text": "" }, { "heading": "4 3 -84.28 16.91%", "text": "" }, { "heading": "4 2 -82.62 17.92%", "text": "" }, { "heading": "4 1 -81.68 17.90%", "text": "" }, { "heading": "3 5 -82.28 15.13%", "text": "" }, { "heading": "3 4 -86.89 17.93%", "text": "" }, { "heading": "3 3 -83.69 17.47%", "text": "" }, { "heading": "3 2 -82.34 19.79%", "text": "" }, { "heading": "3 1 -82.17 19.48%", "text": "" }, { "heading": "2 5 -82.22 15.49%", "text": "" }, { "heading": "2 4 -84.22 16.97%", "text": "" }, { "heading": "2 3 -85.61 19.45%", "text": "" }, { "heading": "2 2 -83.51 20.63%", "text": "" }, { "heading": "2 1 -82.47 20.50%", "text": "" }, { "heading": "4 -103.53 9.19%", "text": "" }, { "heading": "3 -109.82 24.12%", "text": "" }, { "heading": "2 -119.07 35.09%", "text": "" }, { "heading": "8 5 -84.50 13.17%", "text": "" }, { "heading": "8 4 -82.97 13.99%", "text": "" }, { "heading": "8 3 -82.39 15.31%", "text": "" }, { "heading": "8 2 -82.56 15.41%", "text": "" }, { "heading": "8 1 -79.96 14.47%", "text": "" }, { "heading": "7 5 -82.86 13.24%", "text": "" }, { "heading": "7 4 -81.71 12.82%", "text": "" }, { "heading": "7 3 -85.09 17.11%", "text": "" }, { "heading": "7 2 -82.47 15.50%", "text": "" }, { "heading": "7 1 -83.57 16.61%", "text": "" }, { "heading": "6 5 -82.11 13.81%", "text": "" }, { "heading": "6 4 -83.52 13.84%", "text": "" }, { "heading": "6 3 -85.06 15.77%", "text": "" }, { "heading": "6 2 -83.27 16.37%", "text": "" }, { "heading": "6 1 -81.51 16.88%", "text": "" }, { "heading": "5 5 -84.11 14.83%", "text": "" }, { "heading": "5 4 -82.42 14.58%", "text": "" }, { "heading": "5 3 -85.08 17.87%", "text": "" }, { "heading": "5 2 -84.37 18.84%", "text": "" }, { "heading": "5 1 -79.70 15.75%", "text": "" }, { "heading": "4 5 -84.64 15.66%", "text": "" }, { "heading": "4 4 -81.86 14.68%", "text": "" }, { "heading": "4 3 -81.92 15.88%", "text": "" }, { "heading": "4 2 -81.19 18.72%", "text": "" }, { "heading": "4 1 -80.68 19.35%", "text": "" }, { "heading": "3 5 -82.01 14.70%", "text": "" }, { "heading": "3 4 -83.92 17.50%", "text": "" }, { "heading": "3 3 -86.51 20.72%", "text": "" }, { "heading": "3 2 -82.71 19.77%", "text": "" }, { "heading": "3 1 -81.08 19.52%", "text": "" }, { "heading": "2 5 -85.60 18.11%", "text": "" }, { "heading": "2 4 -83.86 19.37%", "text": "" }, { "heading": "2 3 -82.01 19.42%", "text": "" }, { "heading": "2 2 -83.61 19.68%", "text": "" }, { "heading": "2 1 -81.70 19.18%", "text": "" }, { "heading": "8 4 -105.07 6.48%", "text": "" }, { "heading": "8 3 -102.06 7.00%", "text": "" }, { "heading": "8 2 -100.36 7.40%", "text": "" }, { "heading": "8 1 -102.15 13.32%", "text": "" }, { "heading": "7 5 -103.39 4.82%", "text": "" }, { "heading": "7 4 -104.44 7.21%", "text": "" }, { "heading": "7 3 -102.05 8.83%", "text": "" }, { "heading": "7 2 -100.79 7.69%", "text": "" }, { "heading": "7 1 -100.57 11.72%", "text": "" }, { "heading": "6 5 -104.60 6.92%", "text": "" }, { "heading": "6 4 -104.99 7.83%", "text": "" }, { "heading": "6 3 -101.45 7.50%", "text": "" }, { "heading": "6 2 -100.07 8.84%", "text": "" }, { "heading": "6 1 -101.01 10.73%", "text": "" }, { "heading": "5 5 -105.33 6.96%", "text": "" }, { "heading": "5 4 
-103.93 8.36%", "text": "" }, { "heading": "5 3 -100.78 7.24%", "text": "" }, { "heading": "5 2 -100.50 6.63%", "text": "" }, { "heading": "5 1 -101.58 11.70%", "text": "" }, { "heading": "4 5 -103.49 6.39%", "text": "" }, { "heading": "4 4 -105.02 7.52%", "text": "" }, { "heading": "4 3 -100.65 7.39%", "text": "" }, { "heading": "4 2 -99.67 8.04%", "text": "" }, { "heading": "4 1 -101.87 13.03%", "text": "" }, { "heading": "3 5 -104.00 6.31%", "text": "" }, { "heading": "3 4 -103.95 8.91%", "text": "" }, { "heading": "3 3 -101.27 8.23%", "text": "" }, { "heading": "3 2 -100.51 7.95%", "text": "" }, { "heading": "3 1 -101.27 14.86%", "text": "" }, { "heading": "2 5 -104.60 6.92%", "text": "" }, { "heading": "2 4 -103.81 10.06%", "text": "" }, { "heading": "2 3 -100.35 7.41%", "text": "" }, { "heading": "2 2 -101.58 9.97%", "text": "" }, { "heading": "2 1 -105.53 21.35%", "text": "" }, { "heading": "8 4 -104.67 5.73%", "text": "" }, { "heading": "8 3 -101.41 6.69%", "text": "" }, { "heading": "8 2 -100.20 7.31%", "text": "" }, { "heading": "8 1 -103.09 12.71%", "text": "" }, { "heading": "7 5 -105.14 8.18%", "text": "" }, { "heading": "7 4 -102.81 5.27%", "text": "" }, { "heading": "7 3 -101.67 7.06%", "text": "" }, { "heading": "7 2 -100.52 7.84%", "text": "" }, { "heading": "7 1 -100.59 11.85%", "text": "" }, { "heading": "6 5 -104.40 6.05%", "text": "" }, { "heading": "6 4 -104.92 6.97%", "text": "" }, { "heading": "6 3 -100.68 7.14%", "text": "" }, { "heading": "6 2 -100.60 8.56%", "text": "" }, { "heading": "6 1 -100.68 9.62%", "text": "" }, { "heading": "5 5 -104.16 4.70%", "text": "" }, { "heading": "5 4 -104.00 5.94%", "text": "" }, { "heading": "5 3 -101.27 7.59%", "text": "" }, { "heading": "5 2 -100.42 7.57%", "text": "" }, { "heading": "5 1 -100.86 11.05%", "text": "" }, { "heading": "4 5 -104.21 5.65%", "text": "" }, { "heading": "4 4 -104.29 6.84%", "text": "" }, { "heading": "4 3 -101.17 7.89%", "text": "" }, { "heading": "4 2 -100.36 9.77%", "text": "" }, { "heading": "4 1 -102.06 11.65%", "text": "" }, { "heading": "3 5 -103.97 5.90%", "text": "" }, { "heading": "3 4 -102.71 7.61%", "text": "" }, { "heading": "3 3 -100.57 6.96%", "text": "" }, { "heading": "3 2 -100.79 8.10%", "text": "" }, { "heading": "3 1 -100.61 13.34%", "text": "" }, { "heading": "2 5 -104.40 7.10%", "text": "" }, { "heading": "2 4 -101.33 7.97%", "text": "" }, { "heading": "2 3 -100.36 7.08%", "text": "" }, { "heading": "2 2 -102.37 12.27%", "text": "" }, { "heading": "2 1 -107.15 22.85%", "text": "" } ]
2019
null
SP:ddc70109c59cf0db7fe020300ab762a5ac57bd93
[ "This paper studies the internal representations of recurrent neural networks trained on navigation tasks. By varying the weight of different terms in an objective used for supervised pre-training, RNNs are created that either use path integration or landmark memory for navigation. The paper shows that the pretraining method leads to differential performance when the readout layer of these networks networks is trained using Q-learning on different variants of a navigation task. The main result of the paper is obtained by finding the slow points of the dynamics of the trained RNNs. The paper finds that the RNNs pre-trained to use path integration contain 2D continuous attractors, allowing position memory. On the other hand, the RNNs pre-trained for landmark memory contain discrete attractors corresponding to the different landmarks.", "This paper explores how pre-training a recurrent network on different navigational objectives confers different benefits when it comes to solving downstream tasks. First, networks are pretrained on an objective that either emphasizes position (path integration) or landmark memory (identity of the last wall encountered). This pretraining generates recurrent networks of two classes, called PosNets and MemNets (in addition to no pre-training, called RandNets). Surprisingly, the authors found that pre-training confers different benefits that manifests as differential performance of PosNets and MemNets across the suite. Some evidence is provided that this difference has to do with the requirements of the task. Moreover, the authors show how the different pretraining manifests as different dynamical structures (measured using fixed point analyses) present in the networks after pre-training. In particular, the PosNets contained a 2D plane attractor (used to readout position), whereas the MemNets contained clusters of fixed points (corresponding to the previously encountered landmark)." ]
Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map. The precise form of this representation is often considered to be a metric representation of space. An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks. Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment. To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q-learning stage that controls the network's output. We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors or a disordered state. These structures induce a bias on the Q-learning phase, leading to a performance pattern across the tasks that corresponds to metric and topological regularities. By combining the two types of networks in a modular structure, we could obtain better performance for both regularities. Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes, which can be shaped by pre-training and analyzed using dynamical systems methods. Furthermore, we demonstrate that non-metric representations are useful for navigation tasks, and that their combination with metric representations leads to flexible multiple-task learning.
[ { "affiliations": [], "name": "Tie Xu" }, { "affiliations": [], "name": "Omri Barak" } ]
[ { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Yoram Burak", "Ila R. Fiete" ], "title": "Accurate Path Integration in Continuous Attractor Network Models of Grid Cells", "venue": "PLoS Computational Biology, 5(2):e1000291,", "year": 2009 }, { "authors": [ "Christopher J. Cueva", "Xue-Xin Wei" ], "title": "Emergence of grid-like representations by training recurrent neural networks to perform spatial localization. 3 2018", "venue": null, "year": 2018 }, { "authors": [ "Dumitru Erhan", "Yoshua Bengio", "Aaron Courville", "Pierre-Antoine Manzagol", "Pascal Vincent", "Samy Bengio" ], "title": "Why does unsupervised pre-training help deep learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Adam Gaier", "David Ha" ], "title": "Weight agnostic neural networks", "venue": "arXiv preprint arXiv:1906.04358,", "year": 2019 }, { "authors": [ "Kiah Hardcastle", "Niru Maheswaranathan", "Surya Ganguli", "Lisa M. Giocomo" ], "title": "A Multiplexed, Heterogeneous, and Adaptive Code for Navigation in Medial Entorhinal Cortex", "venue": "doi: 10.1016/j.neuron.2017.03.025. URL http://dx.doi", "year": 2017 }, { "authors": [ "Nicolas Heess", "Jonathan J Hunt", "Timothy P Lillicrap", "David Silver" ], "title": "Memory-based control with recurrent neural networks", "venue": "arXiv preprint arXiv:1512.04455,", "year": 2015 }, { "authors": [ "Herbert Jaeger" ], "title": "The &quot;echo state&quot; approach to analysing and training recurrent neural networks-with an Erratum note 1", "venue": "Technical report,", "year": 2010 }, { "authors": [ "Kyobi S Kakaria", "Benjamin L de Bivort" ], "title": "Ring attractor dynamics emerge from a spiking model of the entire protocerebral bridge", "venue": "Frontiers in behavioral neuroscience,", "year": 2017 }, { "authors": [ "Ingmar Kanitscheider", "Ila Fiete" ], "title": "Making our way through the world: Towards a functional understanding of the brain’s spatial circuits", "venue": "Current Opinion in Systems Biology, 3:186–194,", "year": 2017 }, { "authors": [ "Garrett E Katz", "James A Reggia" ], "title": "Using directional fibers to locate fixed points of recurrent neural networks", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2017 }, { "authors": [ "Sung Soo Kim", "Herv Rouault", "Shaul Druckmann", "Vivek Jayaraman" ], "title": "Ring attractor dynamics in the Drosophila central brain", "venue": "Science (New York, N.Y.), 356(6340):849–853,", "year": 2017 }, { "authors": [ "Mantas Lukoševičius", "Herbert Jaeger" ], "title": "Reservoir computing approaches to recurrent neural network training", "venue": "Computer Science Review, 3(3):127–149,", "year": 2009 }, { "authors": [ "Niru Maheswaranathan", "Alex H Williams", "Matthew D Golub", "Surya Ganguli", "David Sussillo" ], "title": "Line attractor dynamics in recurrent networks for sentiment classification", "venue": null, "year": 2019 }, { "authors": [ "Valerio Mante", "David Sussillo", "Krishna V. Shenoy", "William T. Newsome" ], "title": "Context-dependent computation by recurrent dynamics in prefrontal cortex", "venue": "Nature, 503(7474):78–84,", "year": 2013 }, { "authors": [ "Francesca Mastrogiuseppe", "Srdjan Ostojic" ], "title": "Linking Connectivity, Dynamics, and Computations in Low-Rank", "venue": "Recurrent Neural Networks. 
Neuron,", "year": 2018 }, { "authors": [ "Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andrew J. Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu", "Dharshan Kumaran", "Raia Hadsell" ], "title": "Learning to Navigate in Complex Environments", "venue": "URL http://arxiv. org/abs/1611.03673", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Richard G.M. Morris" ], "title": "Spatial localization does not require the presence of local cues", "venue": "Learning and Motivation, 12(2):239–260,", "year": 1981 }, { "authors": [ "John. O’Keefe", "Lynn Nadel" ], "title": "The hippocampus as a cognitive map", "venue": "URL https://repository.arizona.edu/handle/10150/620894", "year": 1978 }, { "authors": [ "Kui-Hong Park", "Yong-Jae Kim", "Jong-Hwan Kim" ], "title": "Modular q-learning based multi-agent cooperation for robot soccer", "venue": "Robotics and Autonomous Systems,", "year": 2001 }, { "authors": [ "David Sussillo", "L.F. Abbott" ], "title": "Generating Coherent Patterns of Activity from Chaotic", "venue": "Neural Networks. Neuron,", "year": 2009 }, { "authors": [ "David Sussillo", "Omri Barak" ], "title": "Opening the Black Box: Low-dimensional dynamics in highdimensional recurrent neural networks. Technical report", "venue": "URL https://barak.net", "year": 2012 }, { "authors": [ "Sylvia Wirth", "Pierre Baraduc", "Aurlie Planté", "Serge Pinède", "Jean Ren Duhamel" ], "title": "Gaze-informed, task-situated representation of space in primate hippocampus during virtual navigation", "venue": "PLoS Biology,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Spatial navigation is an important task that requires a correct internal representation of the world, and thus its mechanistic underpinnings have attracted the attention of scientists for a long time (O’Keefe & Nadel, 1978). A standard tool for navigation is a euclidean map, and this naturally leads to the hypothesis that our internal model is such a map. Artificial navigation also relies on SLAM (Simultaneous localization and mapping) which is based on maps (Kanitscheider & Fiete, 2017a). On the other hand, both from an ecological view and from a pure machine learning perspective, navigation is firstly about reward acquisition, while exploiting the statistical regularities of the environment. Different tasks and environments lead to different statistical regularities. Thus it is unclear which internal representations are optimal for reward acquisition. We take a functional approach to this question by training recurrent neural networks for navigation tasks with various types of statistical regularities. Because we are interested in internal representations, we opt for a two-phase learning scheme instead of end-to-end learning. Inspired by the biological phenomena of evolution and development, we first pre-train the networks to emphasize several aspects of their internal representation. Following pre-training, we use Q-learning to modify the network’s readout weights for specific tasks while maintaining its internal connectivity.\nWe evaluate the performance for different networks on a battery of simple navigation tasks with different statistical regularities and show that the internal representations of the networks manifest in differential performance according to the nature of tasks. The link between task performance and network structure is understood by probing networks’ dynamics, exposing a low-dimensional\nmanifold of slow dynamics in phase space, which is clustered into three major categories: continuous attractor, discrete attractors, and unstructured chaotic dynamics. The different network attractors encode different priors, or inductive bias, for specific tasks which corresponds to metric or topology invariances in the tasks. By combining networks with different inductive biases we could build a modular system with improved multiple-task learning.\nOverall we offer a paradigm which shows how dynamics of recurrent networks implement different priors for environments. Pre-training, which is agnostic to specific tasks, could lead to dramatic difference in the network’s dynamical landscape and affect reinforcement learning of different navigation tasks." }, { "heading": "2 RELATED WORK", "text": "Several recent papers used a functional approach for navigation (Cueva & Wei, 2018; Kanitscheider & Fiete, 2017b; Banino et al., 2018). These works, however, consider the position as the desired output, by assuming that it is the relevant representation for navigation. These works successfully show that the recurrent network agent could solve the neural SLAM problem and that this could result in units of the network exhibiting similar response profiles to those found in neurophysiological experiments (place and grid cells). In our case, the desired behavior was to obtain the reward, and not to report the current position.\nAnother recent approach did define reward acquisition as the goal, by applying deep RL directly to navigation problems in an end to end manner (Mirowski et al., 2016). 
The navigation tasks relied on rich visual cues that allowed evaluation in a state-of-the-art setting. This richness, however, can hinder the greater mechanistic insights that can be obtained from the systematic analysis of toy problems – and accordingly, the focus of these works is on performance.

Our work is also related to recent works in neuroscience that highlight the richness of neural representations for navigation, beyond Euclidean spatial maps (Hardcastle et al., 2017; Wirth et al., 2017).

Our pre-training is similar to unsupervised followed by supervised training (Erhan et al., 2010). In the past few years, end-to-end learning has become the more dominant approach (Graves et al., 2014; Mnih et al., 2013). We highlight the ability of a pre-training framework to manipulate network dynamics and the resulting internal representations, and study their effect as inductive bias." }, { "heading": "3 RESULTS", "text": "" }, { "heading": "3.1 TASK DEFINITION", "text": "Navigation can be described as taking advantage of spatial regularities of the environment to achieve goals. This view naturally leads to considering a cognitive map as an internal model of the environment, but leaves open the question of precisely which type of map is to be expected. To answer this question, we systematically study both a space of networks – emphasizing different internal models – and a space of tasks – emphasizing different spatial regularities. To allow a systematic approach, we design a toy navigation problem, inspired by the Morris water maze (Morris, 1981). An agent is placed in a random position in a discretized square arena (size 15) and has to locate the reward location (yellow square, Fig. 1A), while only receiving input (empty/wall/reward) from the 8 neighboring positions. The reward is placed in one of two possible locations in the room according to an external context signal, and the agent can move in one of the four cardinal directions. At every trial, the agent is placed in a random position in the arena, and the network’s internal state is randomly initialized as well. The platform location is constant across trials for each context (see Methods). The agent is controlled by an RNN that receives the proximal sensory input, as well as feedback of its own chosen action (Fig. 1B). The network’s output is a value for each of the 4 possible actions, the maximum of which is chosen to update the agent’s position. We use a vanilla RNN (see Appendix for LSTM units) described by:

$$h_{t+1} = \left(1 - \frac{1}{\tau}\right) h_t + \frac{1}{\tau} \tanh\left(W h_t + W_i f(z_t) + W_a A_t + W_c C_t\right) \quad (1)$$

$$Q(h_t) = W_o h_t + b_o \quad (2)$$

where $h_t$ is the activity of the neurons in the network (512 neurons as default), $W$ is the recurrent connectivity matrix, and $\tau$ is the timescale of the update. The sensory input $f(z_t)$ is fed through the connection matrix $W_i$, and the action feedback through $W_a$. The context signal $C_t$ is fed through the matrix $W_c$. The network outputs a Q function, which is computed by a linear transformation of its hidden state.
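To make the dynamics concrete, here is a minimal NumPy sketch of the update in equations 1–2. It is illustrative only: the matrix shapes, the 8-dimensional sensory encoding, the 2-dimensional context signal, and the initialization scale are our assumptions, not the authors’ released code.

```python
import numpy as np

N, N_IN, N_ACT, N_CTX, TAU = 512, 8, 4, 2, 2.0  # assumed sizes; N and tau follow the text

rng = np.random.default_rng(0)
scale = 1.0 / np.sqrt(N)
W = rng.normal(0.0, scale, (N, N))       # recurrent connectivity
Wi = rng.normal(0.0, scale, (N, N_IN))   # sensory input weights
Wa = rng.normal(0.0, scale, (N, N_ACT))  # action-feedback weights
Wc = rng.normal(0.0, scale, (N, N_CTX))  # context-signal weights
Wo = np.zeros((N_ACT, N))                # readout to Q-values
bo = np.zeros(N_ACT)

def step(h, f_z, a_prev, c):
    """Equation 1: leaky integration of a tanh-saturated recurrent drive."""
    drive = W @ h + Wi @ f_z + Wa @ a_prev + Wc @ c
    return (1.0 - 1.0 / TAU) * h + (1.0 / TAU) * np.tanh(drive)

def q_values(h):
    """Equation 2: linear readout of the action values."""
    return Wo @ h + bo
```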
Beyond the basic setting (Fig. 1A), we design several variants of the task to emphasize different statistical regularities (Fig. 1C). In all cases, the agent begins from a random position and has to reach the context-dependent reward location in the shortest time using only proximal input. The ”Hole” variant introduces a random placement of obstacles (different numbers and positions) in each trial. The ”Bar” variant introduces a horizontal or vertical bar in random positions in each trial. The various ”Scale” tasks stretch the arena in the horizontal or vertical direction while maintaining the relative position of the rewards. The ”Implicit context” task is similar to the basic setting, but the external context input is eliminated; instead, the color of the walls indicates the reward position. For all these tasks, the agent needs to find a strategy that handles the uncertain elements to achieve the goals. Despite the simple setting of the game, the tasks are not trivial due to identical visual inputs in most of the locations, with various uncertain elements adding to the task difficulty." }, { "heading": "3.2 TRAINING FRAMEWORK", "text": "We aim to understand the interaction between internal representation and the statistical regularities of the various tasks. In principle, this could be accomplished by end-to-end reinforcement learning of many tasks, using various hyper-parameters to allow different solutions to the same task. We opted for a different approach – both for computational efficiency (see Appendix 5.2) and for biological motivations. A biological agent acquires navigation ability during evolution and development, which shapes its elementary cognitive abilities, such as spatial or object memory. This shaping provides a scaffold upon which the animal can adapt and learn quickly to perform diverse tasks during life. Similarly, we divide learning into two phases: a pre-training phase that is task-agnostic and a Q-learning phase that is task-specific (Fig. 2A). During pre-training we modify the network’s internal and input connectivity, while Q-learning only modifies the output.

Pre-training is implemented in an environment similar to the basic task, with an arena size chosen randomly between 10 and 20. The agent’s actions are externally determined as a correlated random walk, instead of being internally generated by the agent. Inspired by neurophysiological findings, we emphasize two different aspects of internal representation – landmark memory (the identity of the last encountered wall) and position encoding (O’Keefe & Nadel, 1978). We thus pre-train the internal connectivity to generate an ensemble of networks with various hyperparameters that control the relative importance of these two aspects, as well as which parts of the connectivity $W, W_a, W_i$ are modified. We term the networks emphasizing the two aspects MemNet and PosNet, respectively, and call the naive random network RandNet (Fig. 2A). This is done by stochastic gradient descent on the following objective function:

$$S = -\alpha \sum_{t=1}^{n} \hat{P}(z_t) \log P(z_t) - \beta \sum_{t=1}^{n} \hat{I}_t \log P(I_t) - \gamma \sum_{t=1}^{n} \hat{A}_t \log P(A_t) \quad (3)$$

with $z = (x, y)$ for position, $I$ for landmark memory (identity of the last wall encountered), and $A$ for action. The term on action serves as a regularizer. The three probability distributions are estimated from the hidden states of the RNN, given by:

$$P(I|h_t) = \frac{\exp(W_m h_t + b_m)}{\sum_m \exp(W_m h_t + b_m)} \quad (4)$$

$$P(A|h_{t-1}, h_t) = \frac{\exp(W_a [h_{t-1}, h_t] + b_a)}{\sum_a \exp(W_a [h_{t-1}, h_t] + b_a)} \quad (5)$$

$$P(z|h_t) = \frac{\exp(-(z - (W_p h_t + b_p))^2 / \sigma^2)}{\sum_z \exp(-(z - (W_p h_t + b_p))^2 / \sigma^2)} \quad (6)$$

where $W_m$, $W_p$, $W_a$ are readout matrices from the hidden states and $[h_{t-1}, h_t]$ denotes the concatenation of the last and current hidden states. Tables 1, 2, 3 in the Appendix show the hyperparameter choices for all networks. The ratio between $\alpha$ and $\beta$ controls the tradeoff between position and memory. The exact values of the hyperparameters were found through trial and error.
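As a concrete reading of this objective, the sketch below evaluates equation 3 with the softmax readouts of equations 4–6 for a single time step. We assume a negative sign in the exponent of equation 6 (a Gaussian-shaped position decoder, which the extraction appears to have dropped); parameter names and array shapes are ours.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pretraining_loss(h_prev, h, Wm, bm, Wa, ba, Wp, bp, grid,
                     pos_idx, mem_idx, act_idx, alpha, beta, gamma, sigma=1.0):
    """Equation 3: weighted cross-entropies for position, landmark memory, action.

    grid is an (n_positions, 2) array of candidate (x, y) locations; pos_idx,
    mem_idx, act_idx index the true position, last wall identity, and action.
    """
    p_mem = softmax(Wm @ h + bm)                                  # eq. 4
    p_act = softmax(Wa @ np.concatenate([h_prev, h]) + ba)        # eq. 5
    mu = Wp @ h + bp                                              # decoded position
    p_pos = softmax(-((grid - mu) ** 2).sum(axis=1) / sigma**2)   # eq. 6
    return (-alpha * np.log(p_pos[pos_idx])
            - beta * np.log(p_mem[mem_idx])
            - gamma * np.log(p_act[act_idx]))
```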
Having obtained this ensemble of networks, we use a Q-learning algorithm with a TD-lambda update for the network’s outputs, which are Q-values. We utilize the fact that only the readout matrix $W_o$ is trained to use a recursive least-squares method, which allows a fast update of weights for different tasks (Sussillo & Abbott, 2009). This choice leads to a much better convergence speed compared to stochastic gradient descent. The update rule used is:

$$W_o(n+1) = W_o(n) - e(n) P(n) H(n)^T \quad (7)$$
$$P(n+1) = (C(n+1) + \alpha I)^{-1} \quad (8)$$
$$C(n+1) = \lambda C(n) + H(n)^T H(n) \quad (9)$$
$$e(n) = W_o H(n) - Y(n) \quad (10)$$

where $H$ is a matrix of hidden states over 120 time steps, $\alpha I$ is a regularizer, and $\lambda$ controls the forgetting rate of past data.

We then analyze the test performance of all networks on all tasks (Figure 2B and Table 3 in the Appendix). Figures 2B,C show that there are correlations between different tasks and between different networks. We quantify this correlation structure by performing principal component analysis of the performance matrix. We find that the first two PCs in task space explain 79% of the variance. The first component corresponds to the difficulty (average performance) of each task, while the coefficients of the second component are informative regarding the nature of the tasks (Fig. 2B, right): Bar (-0.49), Hole (-0.25), Basic (-0.21), Implicit context (-0.12), ScaleX (0.04), ScaleY (0.31), Scale (0.74). We speculate that these numbers characterize the importance of two different invariances inherent in the tasks. Negative coefficients correspond to metric invariance. For example, when overcoming dynamic obstacles, the position remains invariant. This type of task was fundamental in establishing metric cognitive maps in neuroscience (O’Keefe & Nadel, 1978). Positive coefficients correspond to topological invariance, defined as the relation between landmarks unaffected by the metric information.

Observing the behavior of networks for the extreme tasks on this axis indeed confirms the speculation. Fig. 3A shows that the successful agent overcomes the scaling task by finding a set of actions that captures the relations between landmarks and reward, thus generalizing to larger arenas. Fig. 3B shows that the successful agent in the bar task uses a very different strategy. An agent that captures the metric invariance can adjust trajectories and reach the reward each time the obstacle is changed. This ability is often related to the ability to use shortcuts (O’Keefe & Nadel, 1978). The other tasks interpolate between the two extremes, due to the presence of both elements in the tasks. For instance, the implicit context task requires the agent to combine landmark memory (color of the wall) with position to locate the reward.

We thus define metric and topological scores as weighted averages of task performance using the negative and positive coefficients, respectively. Fig. 3C shows the various networks measured by the two scores. We see that random networks (blue) can achieve reasonable performance with some hyperparameter choices, but they are balanced with respect to the metric-topological score. On the other hand, PosNet networks are pushed to the metric side and MemNet networks to the topological side. This result indicates that the inductive bias achieved via task-agnostic pre-training is manifested in the performance of networks on various navigation tasks."
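One way to read the recursive least-squares readout update of equations 7–10 above is the following sketch, in which H stacks the hidden states of the last 120 steps as rows and Y holds the target Q-values. The shape conventions and the default constants are our assumptions; the paper leaves them implicit.

```python
import numpy as np

class RLSReadout:
    """Recursive least-squares update of the readout Wo (equations 7-10, sketch)."""

    def __init__(self, n_hidden, n_actions, alpha=1e-2, lam=0.999):
        self.Wo = np.zeros((n_actions, n_hidden))
        self.C = np.zeros((n_hidden, n_hidden))  # running hidden-state correlation
        self.alpha, self.lam = alpha, lam

    def update(self, H, Y):
        """H: (T, n_hidden) hidden states; Y: (T, n_actions) target Q-values."""
        self.C = self.lam * self.C + H.T @ H                              # eq. 9
        P = np.linalg.inv(self.C + self.alpha * np.eye(self.C.shape[0]))  # eq. 8
        E = self.Wo @ H.T - Y.T                                           # eq. 10
        self.Wo -= E @ H @ P                                              # eq. 7
        return self.Wo
```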
}, { "heading": "3.3 LINKING REPRESENTATION TO DYNAMICS", "text": "What are the underlying structures of different networks that encode the bias for different tasks? We approach this question by noting that RNNs are nonlinear dynamical systems. As such, it is informative to detect fixed points and other special areas of phase space to better understand their dynamics. For instance, a network that memorizes a binary value might be expected to contain two discrete fixed points (Fig. 4A). A network that integrates a continuous value might contain a line attractorKim et al. (2017); Kakaria & de Bivort (2017), and , a network that integrates position might contain a plane attractor – a 2D manifold of fixed points – because this would enable updating x and y coordinates with actions, and maintaining the current position in the absence of action (Burak & Fiete, 2009). Trained networks, however, often converge to approximate fixed points (slow points) (Sussillo & Barak; Mante et al., 2013; Maheswaranathan et al., 2019), as they are not required to maintain the same position for an infinite time. We thus expect the relevant slow points to be somewhere between the actual trajectories and true fixed points. We detect these areas of phase space using adaptations of existing techniques (Appendix 5.3, (Sussillo & Barak)). Briefly, we drive the agent to move in the environment, while recording its position and last seen landmark (wall). This results in a collection of hidden states. Then, for each point in this collection, we relax the dynamics towards approximate fixed points. This procedure results in points with different hidden state velocities for the three networks (Fig. 4B) – RandNet does not seem to have any slow points, while PosNet and MemNet do, with MemNet’s points being slower. The resulting manifold of slow points for a typical PosNet is depicted in Figure 4C, along with the labels of position and stimulus from which relaxation began. It is apparent that pretraining has created in PosNet a smooth representation of position along this manifold. The MemNet manifold represents landmark memory as 4 distinct fixed points without a spatial representation. Note that despite the dominance of position representation in PosNet, landmark memory still modulates this representation (Fig 3A, M) - showing that pretraining did not result in a perfect plane attractor, but rather in an approximate collection of 4 plane attractors (Fig. 3D, MP). This conjunctive representation can also be appreciated by considering the decoding accuracy of trajectories conditioned on the number of wall encounters. As the agent encounters the wall, the decoding of position from the manifold improves, implying the ability to integrate path integration and landmark memory (Fig Appendix 8).\nWe thus see that the pre-training biases are implemented by distinct attractor landscapes, from which we could see both qualitative differences between networks and a trade-off between landmark memory and position encoding. The continuous attractors of PosNet correspond to a metric representation of space, albeit modulated by landmark memory. The discrete attractors of MemNet encode the landmark memory in a robust manner, while sacrificing position encoding. The untrained RandNet, on the other hand, has no clear structure, and relies on a short transient memory of the last landmark.\nThe above analysis was performed on three typical networks and is somewhat time-consuming. 
In order to get a broader view of internal representations in all networks, we use a simple measure of the components of the representation. Specifically, we drove the agent to move in an arena of infinite size that was empty except for a single wall (of a different identity in each trial). We then used a GLM (generalized linear model) to determine the variance in position and in the identity of the encountered wall explained by the network’s hidden state. Figure 6A shows these two measures for all the networks. The results echo those measured with the battery of 7 tasks (Fig. 3C), but are orders of magnitude faster to compute. Indeed, if we correlate these measures with performance on the different tasks, we see that they correspond to the metric-topological axis as defined by PCA (Fig. 5B, compare with Fig. 2B, right)." }, { "heading": "3.4 A MODULAR SYSTEM THAT COMBINES ADVANTAGES OF BOTH DYNAMICS", "text": "Altogether, we showed a differential ability of networks to cope with different environmental regularities via inductive biases encoded in their dynamics. Since this trade-off is a fundamental property of a single-module RNN, it is natural to ask whether we could combine the advantages of both dynamics in a modular system. Inspired by Park et al. (2001), we design a hierarchical system composed of a representation layer on the bottom and a selector layer on top. The representation layer concatenates PosNet and MemNet modules, each evaluating action values according to its own dynamics. The second layer selects the more reliable module based on the combined representation by assigning a value (reliability) to each module. The module with maximal reliability makes the final decision. Thus, the control of the agent shifts between different dynamics according to the current input or history. The modular system significantly shifts the metric-topological balance (Fig. 2C, Fig. 3B). The reliability $V$ is learned in the same way as $Q$ (Appendix 5.7).

$$h^1_{t+1} = \left(1 - \frac{1}{\tau}\right) h^1_t + \frac{1}{\tau} \tanh\left(W_{pos} h^1_t + W^1_i f(z_t) + W^1_a A_t + W^1_c C_t\right) \quad (11)$$

$$h^2_{t+1} = \left(1 - \frac{1}{\tau}\right) h^2_t + \frac{1}{\tau} \tanh\left(W_{mem} h^2_t + W^2_i f(z_t) + W^2_a A_t + W^2_c C_t\right) \quad (12)$$

$$Q^1(h^1_t) = W^1_o h^1_t + b^1_o \quad (13)$$
$$Q^2(h^2_t) = W^2_o h^2_t + b^2_o \quad (14)$$
$$V(h^1_t, h^2_t) = W_{sel}([h^1_t, h^2_t]) + b_{sel} \quad (15)$$"
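The selection mechanism of equations 11–15 amounts to the following sketch: each module proposes Q-values from its own hidden state, and a linear selector reads the concatenated states and hands control to whichever module it judges more reliable. Parameter names and the two-dimensional reliability readout are our assumptions.

```python
import numpy as np

def modular_action(h1, h2, p):
    """Pick an action with the two-module system of equations 11-15 (sketch)."""
    q1 = p["Wo1"] @ h1 + p["bo1"]                         # eq. 13: PosNet module
    q2 = p["Wo2"] @ h2 + p["bo2"]                         # eq. 14: MemNet module
    v = p["Wsel"] @ np.concatenate([h1, h2]) + p["bsel"]  # eq. 15: reliabilities
    q = q1 if v[0] >= v[1] else q2                        # most reliable module decides
    return int(np.argmax(q))
```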
}, { "heading": "4 DISCUSSION", "text": "Our work explores how internal representations for navigation tasks are implemented by the dynamics of recurrent neural networks. We show that pre-training networks in a task-agnostic manner can shape their dynamics into discrete fixed points or into a low-D manifold of slow points. These distinct dynamical objects correspond to landmark memory and spatial memory, respectively. When performing Q-learning for specific tasks, these dynamical objects serve as priors for the network’s representations and shift its performance on the various navigation tasks. Here we show that both plane attractors and discrete attractors are useful. It would be interesting to see whether and how other dynamical objects can serve as inductive biases for other domains. In tasks outside of reinforcement learning, for instance, line attractors were shown to underlie network computations (Mante et al., 2013; Maheswaranathan et al., 2019).

An agent that has to perform several navigation tasks will require both types of representations. A single recurrent network, however, has a trade-off between adapting to one type of task or to another. The attractor landscape picture provides a possible dynamical reason for this trade-off. Position requires a continuous attractor, whereas stimulus memory requires discrete attractors. While it is possible to have four separated plane attractors, it is perhaps easier for learning to converge to one or the other. A different solution for learning multiple tasks is to consider multiple modules, each optimized for a different dynamical regime. We showed that such a modular system is able to learn multiple tasks, in a manner that is more flexible than any single-module network we could train.

Pre-training alters network connectivity. The resulting connectivity is expected to lie between random networks (Lukoševičius & Jaeger, 2009) and designed ones (Burak & Fiete, 2009). It is perhaps surprising that even the untrained RandNet can perform some of the navigation tasks using only Q-learning of the readout (with appropriate hyperparameters, see Tables 2, 3 and section 5.7 ”Linking dynamics to connectivity” in the Appendix). This is consistent with recent work showing that some architectures can perform various tasks without learning (Gaier & Ha, 2019). Studying the connectivity changes due to pre-training may help understand the statistics from which to draw better random networks (Appendix 5.7).

Apart from improving the understanding of representation and dynamics, it is interesting to consider the efficiency of our two-stage learning compared to standard approaches. We found that end-to-end training is much slower, cannot learn topological tasks, and has weaker transfer between tasks (see Appendix section 5.2). It is thus interesting to explore whether this approach could be used to accelerate learning in other domains, similar to curriculum learning (Bengio et al., 2009)." }, { "heading": "ACKNOWLEDGMENTS", "text": "OB is supported by the Israeli Science Foundation (346/16) and by a Rappaport Institute Thematic grant. TX is supported by a Key Scientific Technological Innovation Research Project of the Chinese Ministry of Education and by the Tsinghua University Initiative Scientific Research Program for computational resources." }, { "heading": "5 APPENDIX", "text": "" }, { "heading": "5.1 PERFORMANCE MEASURE FOR EACH TASK", "text": "When testing the agent on a task, we perform a trial for each possible initial position of the agent. Note that the hidden state is randomly initialized in each trial, so this is not an exhaustive search of all possible trial types. We then measure the time it takes the agent to reach the target. This time is normalized by an approximate optimal strategy – moving from the initial position to a corner of the arena (providing x and y information) and then heading straight to the reward. If the agent fails to reach the target after 120 steps, the trial score is $-1$:

$$\text{Score} = \begin{cases} T / T_{opt} & T < T_{max} \\ -1 & T > T_{max} \end{cases} \quad (16)$$" }, { "heading": "5.2 TWO-STAGE VERSUS END-TO-END LEARNING", "text": "To check the effectiveness of our two-stage learning (pre-training followed by Q-learning), we contrast it with an end-to-end approach. We considered two approaches to training: a classic deep Q-learning version (Mnih et al., 2013) and a method adapted from RDPG (Heess et al., 2015). The naive Q-learning is separated into a play phase and a training phase. During the play phase, the agent collects experience as $(s_t, a_t, h_t, r_t)$ into a replay buffer. The action value $Q(h, a)$ is computed through the TD-lambda method as the target function.
During the training phase, the agent samples past experience from the replay buffer and performs gradient descent to minimize the difference between the expected $Q(h, a)$ and the target $Q$. We found that for most tasks the deep Q-learning method was better than the adapted RDPG, and thus used it for our benchmarks. We used both LSTM and vanilla RNN units for these tests.

We found that, for the basic task, all networks achieve optimal performance, but our approach is significantly more data-efficient, even with random networks (Fig. 6A). For all topological tasks, the end-to-end approach fails with both vanilla RNN and LSTM (Fig. 6C, Table 3). The end-to-end approach performs relatively well on metric tasks, except for the implicit context task (Fig. 6B, Table 3), where it converges to a performance similar to PosNet but with a much slower convergence speed ($10^5$ trials vs. $10^4$ trials).

For the end-to-end approaches, a critical question is whether an internal representation emerges that enables better performance on similar tasks. For instance, do networks that were successfully end-to-end trained on the basic task develop a representation that facilitates learning the bar task? To answer this question, we use networks that were end-to-end trained on one task as a basis for RLS Q-learning of a different task. This allows comparison with the pre-trained networks. Figure 7 shows that pre-training provides a better substrate for subsequent Q-learning – even when considering generalization within metric tasks. For the implicit context task, the difference is even greater." }, { "heading": "5.3 EXPLORING THE LOW-D NETWORK DYNAMICS", "text": "Recurrent neural networks are nonlinear dynamical systems. As such, they behave differently in different areas of phase space. It is often informative to locate fixed points of the dynamics and use their local dynamics as anchors to understand the global dynamics. When considering trained RNNs, it is reasonable to expect approximate fixed points rather than exact ones. This is because a fixed point corresponds to maintaining the same hidden state for an infinite time, whereas a trained network is only exposed to finite times. These slow points (Sussillo & Barak; Mante et al., 2013) can be detected in several manners (Sussillo & Barak; Katz & Reggia, 2017). For the case of stable fixed points (attractors), it is also possible to simulate the dynamics until convergence. In our setting, we opt for the latter option. Because the agent never stays in the same place, we relax the dynamics towards attractors by providing as action feedback the average of all 4 actions. The relevant manifold (e.g. a plane attractor) might contain areas that are more stable than others (for instance, a few true fixed points), but we want to avoid detecting only these areas. We thus search for the relevant manifold in the following manner. We drive the agent to move in the environment, while recording its position and last seen stimulus (wall). This results in a collection of hidden states, labelled by position and stimulus, that we term the $m = 0$ manifold. For each point on the manifold, we continue simulating the dynamics for $m$ extra steps while providing as input the average of all 4 actions, resulting in further $m \neq 0$ manifolds. If these states are the underlying scaffold for the dynamics, they should encode the position (or memory). We therefore choose $m$ by a cross-validation method – decoding new trajectories obtained in the basic task by using the $k = 15$ nearest neighbors in each $m$-manifold.
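This cross-validation step can be sketched as follows; the results of the selection are reported next. The averaging of neighbor positions and the one-grid-cell success threshold are our choices for illustration.

```python
import numpy as np

def knn_decode_accuracy(manifold_h, manifold_pos, test_h, test_pos, k=15):
    """Decode position from one candidate m-manifold with k nearest neighbors."""
    hits = 0
    for h, pos in zip(test_h, test_pos):
        dist = np.linalg.norm(manifold_h - h, axis=1)
        neighbors = np.argsort(dist)[:k]
        estimate = manifold_pos[neighbors].mean(axis=0)
        hits += np.linalg.norm(estimate - pos) < 1.0  # within one grid cell (assumed)
    return hits / len(test_h)
```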
The red curve in Figure 8A shows the resulting decoding accuracy for position using PosNet; the accuracy starts to fall around $m = 25$, indicating that further relaxation leads to irrelevant fixed points." }, { "heading": "5.4 ATTRACTOR LANDSCAPE FOR DIFFERENT RECURRENT ARCHITECTURES", "text": "We tested the qualitative shape of the slow manifolds that emerge from pre-training other unit types. Specifically, we pre-trained an LSTM network using the same parameters as PosNet1 and MemNet1 (Table 1). Figure 9 shows a qualitatively similar behavior to that described in the main text. Note that MemNet has slow regions instead of discrete points, and we suspect discrete attractors might appear with longer relaxation times. The differences between the columns also demonstrate that slow points, revealed by relaxation, are generally helpful for analyzing the dynamics of different types of recurrent networks." }, { "heading": "5.5 PRE-TRAINING PROTOCOLS AND PERFORMANCE OF NETWORKS", "text": "As explained in the main text, pre-training emphasizes decoding of either landmark memory or the position of the agent. We used several variants of hyperparameters to pre-train the networks. Equation 17, written again for convenience, defines the relevant parameters:

$$S = -\alpha \sum_{t=1}^{n} \hat{P}(z_t) \log P(z_t) - \beta \sum_{t=1}^{n} \hat{I}_t \log P(I_t) - \gamma \sum_{t=1}^{n} \hat{A}_t \log P(A_t) \quad (17)$$

The agent was driven to explore an empty arena (with walls) using random actions, with a probability $p$ of changing action (direction) at any step. Table 1 shows the protocols (hyperparameters), Table 2 shows the random-network hyperparameters, and Table 3 shows the performance of the resulting networks on all tasks. For all pre-training protocols, an $l_2$ regularizer of $10^{-6}$ on internal weights was used, with a learning rate of $10^{-5}$. All PosNet and MemNet training started from RandNet1 (detailed below).

Different hyperparameters for RandNets:

$$h_{t+1} = \left(1 - \frac{1}{\tau}\right) h_t + \frac{1}{\tau} \tanh\left(W h_t + W_i f(z_t) + W_a A_t + W_c C_t\right) \quad (18)$$

$$Q(h_t) = W_o h_t + b_o \quad (19)$$

The number of neurons used is 512 and the time constant $\tau$ is taken to be 2; the choice of hyperparameters follows the standard reservoir computing literature (Jaeger, 2010; Lukoševičius & Jaeger). The weights are taken from a standard normal distribution. It is crucial to choose an appropriate standard deviation for the success of training (Appendix section 5), which is summarized in Table 2; each unit represents $1/\sqrt{N}$." }, { "heading": "5.6 MODULAR NETWORK PROTOCOL", "text": "The results of the modular network are obtained by combining PosNet 1 and MemNet 25 from the tables above. Both the $Q$ function and the $V$ function are learned in the same way as in the main results (equations 7, 8, 9, 10)." }, { "heading": "5.7 LINKING DYNAMICS TO CONNECTIVITY", "text": "Pre-training modified the internal connectivity of the networks. Here, we explore the link between connectivity and dynamics. We draw inspiration from two observations in the field of reservoir computing (Lukoševičius & Jaeger, 2009). On the one hand, the norm of the internal connectivity has a large effect on network dynamics and performance, with an advantage to residing on the edge of chaos (Jaeger, 2010). On the other hand, restricting learning to the readout weights (which are then fed back to the network, Sussillo & Abbott (2009)) results in a low-rank perturbation to the connectivity, the possible contributions of which were recently explored (Mastrogiuseppe & Ostojic, 2018).

We thus analyzed both aspects.
Fig. 10A shows the norms of several matrices as they evolve through pre-training, showing an opposite trend for PosNet and MemNet with respect to the internal connectivity $W$. To estimate the low-rank component, we performed singular value decomposition on the change to the internal connectivity induced by pre-training (Fig. 10B):

$$W = W_0 + U S V^T \quad (20)$$

The singular values of the actual change were compared to a shuffled version, revealing their low-rank structure (Fig. 10C,D). Note that pre-training was not constrained to generate such a low-rank perturbation. Furthermore, we show that the low-rank structure is partially correlated with the network’s inputs, possibly contributing to their effective amplification through the dynamics (Fig. 10E-H). Because we detected both types of connectivity changes (norm and low-rank), we next sought to characterize their relative contributions to network dynamics, representation, and behavior.

In order to assess the effect of matrix norms, we generated a large number of scaled random matrices and used the GLM analysis of Figure 5 to assess their influence on dynamics. We see that the trade-off between landmark memory and path integration is affected by the norm (Fig. 10E). The actual numbers, however, are much lower for the scaled random matrices compared to the pre-trained ones – indicating the importance of the low-rank component (Fig. 10F). Indeed, when removing even only the leading 5 ranks from $\Delta W$, network encoding and performance on all tasks approach those of RandNet." }, { "heading": "5.8 BEHAVIOR OF DIFFERENT NETWORKS", "text": "Different networks develop diverse strategies for metric and topological tasks. In this section, we give examples of typical trajectories of PosNet, MemNet, and RandNet in the basic, bar, and scaling tasks." } ]
2020
IMPLEMENTING INDUCTIVE BIAS FOR DIFFERENT NAVIGATION TASKS THROUGH DIVERSE RNN ATTRACTORS
SP:faca1e6eda4ad3b91ab99995e420398c01cc0e42
[ "This paper presents a computational model of motivation for Q learning and relates it to biological models of motivation. Motivation is presented to the agent as a component of its inputs, and is encoded in a vectorised reward function where each component of the reward is weighted. This approach is explored in three domains: a modified four-room domain where each room represents a different reward in the reward vector, a route planning problem, and a pavlovian conditioning example where neuronal activations are compared to mice undergoing a similar conditioning.", "The authors investigate mechanisms underlying action selection in artificial agents and mice. To achieve this goal, they use RL to train neural networks to choose actions that maximize their temporally discounted sum of future rewards. Importantly, these rewards depend on a motivation factor that is itself a function of time and action; this motivation factor is the key difference between the authors' approach and \"vanilla\" RL. In simple tasks, the RL agent learns effective strategies (i.e., migrating between rooms in Fig. 1, and minimizing path lengths for the vehicle routing problem in Fig. 5). " ]
How can animals behave effectively in conditions involving different motivational contexts? Here, we propose how reinforcement learning neural networks can learn optimal behavior for dynamically changing motivational salience vectors. First, we show that Q-learning neural networks with motivation can navigate in an environment with dynamic rewards. Second, we show that such networks can learn complex behaviors simultaneously directed towards several goals distributed in an environment. Finally, we show that in a Pavlovian conditioning task, the responses of the neurons in our model resemble the firing patterns of neurons in the ventral pallidum (VP), a basal ganglia structure involved in motivated behaviors. We show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely-tuned classes of neurons, responding to positive and negative rewards. Our model generates predictions for the VP connectivity. We conclude that networks with motivation can rapidly adapt their behavior to varying conditions without changes in synaptic strength when expected reward is modulated by motivation. Such networks may also provide a mechanism for how hierarchical reinforcement learning is implemented in the brain.
[]
[ { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "P.L. Bacon", "D. Precup" ], "title": "Constructing temporal abstractions autonomously in reinforcement learning", "venue": "AI Magazine,", "year": 2018 }, { "authors": [ "K.C. Berridge" ], "title": "From prediction error to incentive salience: mesolimbic computation of reward motivation", "venue": "Eur J Neurosci,", "year": 2012 }, { "authors": [ "K.C. Berridge", "J. Schulkin" ], "title": "Palatability shift of a salt-associated incentive during sodium depletion", "venue": "Q J Exp Psychol B,", "year": 1989 }, { "authors": [ "J.H.G.B. Dantzig" ], "title": "Ramser. The truck dispatching problem", "venue": "Management science,", "year": 1959 }, { "authors": [ "Anthony Dickinson", "Bernard Balleine" ], "title": "The role of learning in the operation of motivational systems", "venue": "Stevens’ handbook of experimental psychology,", "year": 2002 }, { "authors": [ "Howard Eichenbaum", "Paul Dudchenko", "Emma Wood", "Matthew Shapiro", "Heikki Tanila" ], "title": "The hippocampus, memory, and place cells: is it spatial memory or a memory space? Neuron", "venue": null, "year": 1999 }, { "authors": [ "E.S. Her", "N. Huh", "J. Kim", "M.W. Jung" ], "title": "Neuronal activity in dorsomedial and dorsolateral striatum under the requirement for temporal credit assignment", "venue": "Sci Rep,", "year": 2016 }, { "authors": [ "William Hodos" ], "title": "Progressive ratio as a measure of reward", "venue": "strength. Science,", "year": 1961 }, { "authors": [ "Mehdi Keramati", "Boris Gutkin" ], "title": "Homeostatic reinforcement learning for integrating reward collection and physiological stability", "venue": "Elife, 3:e04811,", "year": 2014 }, { "authors": [ "C.K. Machens", "R. Romo", "C.D. Brody" ], "title": "Flexible control of mutual inhibition: a neural model of two-interval discrimination", "venue": "Science, 307:1121–1124,", "year": 2005 }, { "authors": [ "James G Mansfield", "Christopher L Cunningham" ], "title": "Conditioning and extinction of tolerance to the hypothermic effect of ethanol in rats", "venue": "Journal of Comparative and Physiological Psychology,", "year": 1980 }, { "authors": [ "V. Mnih", "K. Kavukcuoglu", "D. Silver", "A.A. Rusu", "J. Veness", "M.G. Bellemare", "A. Graves", "M. Riedmiller", "A.K. Fidjeland", "G. Ostrovski", "S. Petersen" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518:529–533,", "year": 2015 }, { "authors": [ "J.M. Richard", "F. Ambroggi", "P.H. Janak", "H.L. Fields" ], "title": "Ventral pallidum neurons encode incentive value and promote cue-elicited instrumental", "venue": "actions. Neuron,", "year": 2016 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "A Schwartz" ], "title": "A reinforcement learning method for maximizing undiscounted rewards", "venue": "In Proceedings of the Tenth International Conference on Machine Learning (ICML", "year": 1993 }, { "authors": [ "I. Sinakevitch", "G.R. Bjorklund", "J.M. Newbern", "R.C. Gerkin", "B.H. 
Smith" ], "title": "Comparative study of chemical neuroanatomy of the olfactory neuropil in mouse, honey bee, and human", "venue": "Biol Cybern,", "year": 2018 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "A temporal-difference model of classical conditioning", "venue": "In Proceedings of the ninth annual conference of the cognitive science society,", "year": 1987 }, { "authors": [ "R.S. Sutton" ], "title": "The bitter lesson", "venue": null, "year": 2019 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Reinforcement learning: an introduction", "venue": null, "year": 1998 }, { "authors": [ "R.S. Sutton", "D. Precup", "S. Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial Intelligence,", "year": 1999 }, { "authors": [ "K.F. Wong", "A.C. Huk", "M.N. Shadlen", "X.J. Wang" ], "title": "Neural circuit dynamics underlying accumulation of time-varying evidence during perceptual decision making", "venue": "Front Comput Neurosci,", "year": 2007 }, { "authors": [ "J. Zhang", "K.C. Berridge", "A.J. Tindell", "K.S. Smith", "J.W. Aldridge" ], "title": "A neural computational model of incentive salience", "venue": "PLoS Comput Biol,", "year": 2009 } ]
[ { "heading": "1 INTRODUCTION", "text": "Motivation is a cognitive process that propels an individual’s behavior towards or away from a particular object, perceived event, or outcome (Zhang et al., 2009). Mathematically, motivation can be viewed as subjective modulation of the perceived reward value before the reward is received. Therefore, it reflects an organism’s wanting of the reward before the outcome is actually achieved.\nComputational models for motivated behavior, which are best represented by reinforcement learning (RL) models, are mostly concerned with the learning aspect of behavior. However, fluctuations in physiological states, such as confidence and motivation, can also profoundly affect behavior (Zhang et al., 2009). Modeling such factors is thus an important goal in computational neuroscience and is in the early stages of mathematical description (Berridge, 2012).\nHere we build a neural network theory for motivational modulation of behavior based on Q-learning and apply this theory to mice performing Pavlovian conditioning task in which experimental observations of neural responses obtained in the ventral pallidum (VP) are available. We show that our motivated RL model both learns to correctly predict motivation-dependent rewards in the Pavlovian conditioning task and is consistent with responses of neurons in the VP. In particular, we show that, similarly to the VP neurons, Q-learning neural networks contain two oppositely-tuned populations of neurons responsive to positive and negative rewards. In the model, these two populations form a push-pull network that helps maintain motivation-dependent variables when inputs are missing. Our RL-based model is both consistent with experimental data and predicts the structure of the VP networks. We thus argue that motivation leads to complex behaviors which may add an extra level of complexity to machine learning approaches and is consistent with biological data." }, { "heading": "2 RESULTS", "text": "Motivation is defined mathematically as a need-dependent modulation of the perceived reward value depending on animal’s extrinsic or intrinsic conditions (Zhang et al., 2009). Thus, rats, which are normally repelled by high levels of salt in their food, may become attracted to a salt-containing solution following salt-free diet (Berridge, 2012). To model this observation, Berridge & Schulkin\n(1989) have proposed that the perceived reward rt received at time t is not absolute, but is modulated by an internal variable reflecting the level of motivation, which we will call here µ. The perceived level of the reward r̃t as a function of motivation µ can be expressed by the following equation:\nr̃t = r̃(rt, µ) (1) In the simplest example, the reward, associated with salt is given by r̃t = µrt. Baseline motivation towards salt can be defined by µ = −1, leading to the perceived reward of r̃t = −rt < 0. Thus, normally the presence of salt in the diet is undesired. In the salt-free condition, the motivation changes to µ = +1, leading to the subjective reward of r̃t = +rt ≥ 0. Thus salt-containing diet becomes attractive. In reality, the function r̃(...) defining the impact of motivation on a perceived reward is complex (Zhang et al., 2009), including the dependence on multiple factors described by a motivation vector ~µ. Individual components of this vector describe various needs experienced by the organism, such as thirst (e.g. µ1), appetite (µ2), etc. 
In this study, we explore the computational impact of the motivation vector in the context of RL and investigate the brain circuits that might implement these computations.

Our approach to motivation is based on Q-learning (Watkins & Dayan, 1992), which relies on an agent estimating the Q-function, defined as the sum of future rewards given an action $a_t$ chosen in a state $\vec{s}_t$ at time point $t$: $Q(\vec{s}_t, a_t) = \sum_{\tau=0}^{\infty} r(\vec{s}_{t+\tau}|a_t)\gamma^\tau$ (here and below, we omit averaging for simplicity). Here $0 < \gamma \leq 1$ is the discounting factor that keeps the sum from diverging and balances the preference for short- versus long-term rewards. If a correct Q-function is known, a rational agent picks the action that maximizes future rewards: $a_t \leftarrow \arg\max_a Q(\vec{s}_t, a)$. In the case of motivation in equation 1, as reward values are affected by the motivation vector $\vec{\mu}$, for the Q-function we obtain:

$$Q(\vec{s}_t, a_t, \vec{\mu}) = \sum_{\tau=0}^{\infty} \tilde{r}(\vec{s}_{t+\tau}, \vec{\mu}_{t+\tau}|a_t)\gamma^\tau \quad (2)$$

Here $\tilde{r}(\vec{s}_{t+\tau}, \vec{\mu}_{t+\tau}|a_t)$ is the motivation-dependent perceived reward obtained in a state $\vec{s}_{t+\tau}$ reached at time $t+\tau$ given the action $a_t$ chosen at time $t$.

The state of the agent $\vec{s}_t$ and its motivation $\vec{\mu}$ are distinct. Motivation is a slowly changing variable that, on average, is not affected substantially by a single action. For example, the animal’s appetite does not change substantially during a single trial. At the same time, the actions selected by the animal lead to immediate changes of the animal’s state $\vec{s}_t$. Recent research in neuroscience suggests that motivation and state may be represented and computed separately in the mammalian brain. Whereas motivation is usually attributed to the regions of the reward system, such as the VP (Berridge & Schulkin, 1989; Berridge, 2012), the state is likely to be computed elsewhere, e.g. in the hippocampus (Eichenbaum et al., 1999) or cortex. In RL, an agent’s state and motivation may have different mathematical representations. In the examples below, the state variable is given by a one-hot vector, while motivation is represented by a full vector. The two arguments of the Q-function, $\vec{s}_t$ and $\vec{\mu}$, are therefore distinct. Finally, in a hierarchical RL implementation, motivation is provided by a higher-level network, while information about the state is generated externally.

Although the Q-function with motivation (equation 2) is similar to the Q-function in goal-conditioned RL (Schaul et al., 2015; Andrychowicz et al., 2017), the underlying learning dynamics is different. Motivated behavior pursues multiple distributed sources of dynamic rewards. The Q-function therefore accounts for the future dynamics of motivation. This way, an agent with motivation chooses what reward to pursue – making it also different from RL with subgoals (Sutton et al., 1999). Behavior with motivation therefore involves minimal to no handcrafted features, which suggests that motivation could provide a step towards general methods that leverage computation – a goal identified by Richard Sutton (2019).

As in the case of standard Q-learning, the action chosen by a rational agent maximizes the sum of the expected future perceived rewards, i.e. $a_t \leftarrow \arg\max_a Q(\vec{s}_t, a, \vec{\mu})$. To learn a correct Q-function, one can use the Temporal Difference (TD) method (Sutton & Barto, 1998). If the Q-function is learned perfectly, it satisfies the recursive relationship $Q(\vec{s}_t, a_t, \vec{\mu}) = \tilde{r}(\vec{s}_t, \vec{\mu}_t) + \gamma \max_{a_{t+1}} Q(\vec{s}_{t+1}, a_{t+1}, \vec{\mu}_{t+1})$. For an incompletely learned motivation-dependent Q-function, the TD error $\delta$ is non-zero:

$$\delta = \tilde{r}(\vec{s}_t, \vec{\mu}_t) + \gamma \max_{a_{t+1}} Q(\vec{s}_{t+1}, a_{t+1}, \vec{\mu}_{t+1}) - Q(\vec{s}_t, a_t, \vec{\mu}_t) \quad (3)$$

The TD error can be used to update the motivation-dependent Q-function directly or to train neural networks to optimize their policy. The Q-function depends on the new set of variables $\vec{\mu}$ that evolve following their own rules. These variables reflect fluctuations in physiological or psychological states that substantially change the reward function and, therefore, can generate flexible behaviors dependent on an animal’s ongoing needs. We trained neural networks via backpropagation of the TD error (equation 3), an approach employed in deep Q-learning (Mnih et al., 2015). Below we present several examples in which neural networks could be trained to solve motivation-dependent tasks."
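To make the quantity in equation 3 concrete, here is a tabular sketch in which Q is indexed by both state and motivation. In the experiments, the same δ is instead backpropagated through a network; the dictionary keying used here is our choice for readability (motivation is passed as a tuple so it can serve as a key).

```python
import numpy as np

def td_error(Q, s, a, mu, r_tilde, s_next, mu_next, gamma=0.9):
    """Equation 3 for a tabular Q indexed by (state, motivation) pairs."""
    target = r_tilde + gamma * np.max(Q[(s_next, mu_next)])
    return target - Q[(s, mu)][a]

def td_update(Q, s, a, mu, r_tilde, s_next, mu_next, lr=0.1, gamma=0.9):
    """One Q-learning step driven by the motivation-dependent TD error."""
    delta = td_error(Q, s, a, mu, r_tilde, s_next, mu_next, gamma)
    Q[(s, mu)][a] += lr * delta
    return delta
```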
}, { "heading": "2.1 THE FOUR DEMANDS TASK", "text": "Consider the example in Figure 1. An agent navigates in a 6x6 square gridworld separated into four 3x3 subdivisions (rooms) (Figure 1A). The environment was inspired by the work of Sutton et al. (1999); however, the task is different, as described below. In each room, the agent receives one and only one type of reward $r_n(x_t, y_t)$, where $n = 1...4$ (Figure 1B). These rewards can be viewed as four different resources, such as water, food, sleep, and work. Motivation is described in this system by a 4D vector $\vec{\mu}$ defining the affinity of the agent for each of these resources. When the agent enters room number $n$, the corresponding resource in the room is consumed, the agent receives a reward defined by $\tilde{r}_t = \mu_n$, and the corresponding component of the motivation vector $\mu_n$ is reset to zero (Figure 1C). On the next time step, the motivations for all four rooms are increased by one, i.e. $\mu_n \leftarrow \mu_n + 1$, which reflects the additional “wanting” of each resource induced by a “growing appetite”. After a prolonged period of building up appetite, the motivation towards a resource saturates at a fixed maximum value of $\theta$, which becomes a parameter of this model, determining the behavior.

What are the potential behaviors of the agent? Assume first that the maximum allowed motivation $\theta$ is large and does not influence our results. If the agent always stays in the same room (the one-room binge strategy, Figure 1D), the rewards received by the agent consist of a sequence of zeros and ones, i.e. 0, 1, 0, 1, ... (in our model, after the motivation is set to zero, it is increased by one on the next time step). The average reward corresponding to this strategy is therefore $\bar{r}_{\text{one-room binge}} = 1/2$. The average reward can be increased if the agent jumps from room to room on each time step (the two-room binge strategy, Figure 1E). In this case, the agent receives a reward of one on every step, and the average reward is $\bar{r}_{\text{two-room binge}} = 1$. Two-room binging therefore outperforms the one-room binge strategy. Finally, the agent can migrate by moving in a cycle through all four rooms (Figure 1F). In this case, the agent spends three steps in each room, and the overall period of migration is 12 steps. During these three steps, the agent receives rewards of 9 (the agent left this room nine steps ago), then 0, and 1, so $\bar{r}_{\text{migration}} = 10/3$. The migration strategy is thus more beneficial for the agent than both binging strategies. Migration, however, is affected by the maximum allowed motivation value $\theta$. When $\theta < 9$, the benefits of the migration strategy are reduced. For $\theta = 1$, for example, migration yields a reward rate of just $\bar{r}_{\text{migration}}|_{\theta=1} = 2/3$, which is below the return of two-room binging. Thus, our model should display various behaviors depending on $\theta$.
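These average-reward numbers can be checked directly; the snippet below reproduces them, including the collapse of migration below two-room binging when θ = 1.

```python
def strategy_returns(theta):
    """Average reward per step of the three strategies in the Four Demands task."""
    one_room_binge = (0 + 1) / 2  # rewards alternate 0, 1, 0, 1, ...
    two_room_binge = 1.0          # a reward of 1 on every step
    # Migration: a 12-step cycle, 3 steps per room; on entry the room's motivation
    # has grown for 9 steps (capped at theta), followed by rewards of 0 and 1.
    migration = (min(9, theta) + 0 + 1) / 3
    return one_room_binge, two_room_binge, migration

print(strategy_returns(theta=10))  # (0.5, 1.0, 3.33...): migration wins
print(strategy_returns(theta=1))   # (0.5, 1.0, 0.66...): two-room binging wins
```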
We trained a simple feedforward neural network (Figure 2A) to generate behaviors using the state vector and the 4D vector of motivations as inputs. The network computed Q-values for five possible actions (up, down, left, right, stay) using the TD method, backpropagating the $\delta$ signal. The binary 36D (6x6) one-hot state vector represented the agent’s position. The network was trained 41 times for different values of the maximum allowed motivation $\theta$. As expected, the behavior displayed by the network depended on this parameter. The phase diagram of the agent’s behaviors (Figure 2B, blue circles) shows that the agent successfully discovered the migration and two-room binge strategies for high and low values of $\theta$, respectively. For intermediate values of $\theta$ ($1.7 < \theta < 3$), the network discovered a delayed two-room binging strategy, in which it spent an extra step in one of the rooms. Networks with motivation can also display a variety of complex behaviors for different motivation dynamics, such as binging, addiction, withdrawal, etc. In one example, by increasing the maximum motivation value for one of the demands (”smoking”), we trained networks to display a ”smoking addiction” (Figure 3A,B).

Does motivation contribute to learning optimal strategies? To address this question, we performed a similar set of simulations, except that the motivation input to the network was suppressed ($\mu = 0$). Although the input to such “non-motivated” networks was sufficient to recover the optimal strategies, in most of the simulations the agents exercised two-room binging (Figure 2B, yellow circles). The migration strategy, despite being optimal in 3/4 of the simulations, was successfully learned by only a single agent out of 41. Moreover, the performance of the non-motivated networks often fell to that of a random walk (Figure 2B, orange circles). We conclude that motivation may facilitate learning by providing additional cues for temporal credit assignment in the rewards. Overall, we suggest that motivation is helpful in generating complex ongoing behaviors based on simple conditions." }, { "heading": "2.2 THE TRANSPORT NETWORK TASK", "text": "In the next example, the agent navigates in a system of roads connecting $N$ cities (Figure 4A). The goal of the agent is to visit a certain subset of target cities. The visiting order is not important, but the agent is supposed to use the route of minimal length. This problem is similar to the vehicle routing problem (Dantzig & Ramser, 1959) (we do not require agents to return to the city of origin).

We trained a neural network that receives the agent’s state (position) and the motivation vector as inputs and computes the Q-values of all available actions (connected cities) for the given position (Figure 5A). In every city, the agent receives a reward equal to the component of the motivation vector corresponding to the agent’s position. The network is also negatively rewarded at every link between cities in proportion to the length of the link. We trained the network with the TD method by backpropagating the TD error. Trained neural networks produced behaviors that closely match the shortest-path solution (Figure 5B). In 82% of the test examples, the agent traveled the shortest path.
In the remaining cases, the paths chosen by the agents were close to the shortest-path solution. Overall, we suggest that networks with motivation can solve complex transport problems. In doing so, the agent is not instructed to pursue any particular goal, but instead learns to set the next target autonomously." }, { "heading": "2.3 RESPONSES OF THE VP NEURONS IN PAVLOVIAN CONDITIONING TASK", "text": "To explore how motivation may be implemented in the brain, we trained 3 mice to associate specific cues (sound tones) with different rewards (Figure 6A,B). In the experiment, the animals received one of five possible rewards: a large or small positive reward (a drop of water); a large or small negative reward (an air puff); or a zero reward – nothing at all. Trials containing positive or negative rewards, combined with zero-reward trials, were separated into different blocks. During these blocks of trials, the animal was expected to be motivated and demotivated, respectively. In the course of training, the animals learned to anticipate both positive and negative rewards.

To relate behavior to the underlying neuronal circuits, we recorded the activity of neurons in the VP – a brain area implicated in computing motivation (Berridge & Schulkin, 1989). The recordings were made while the mice were performing this task (Figure 6A,B). Overall, we obtained 149 well-isolated single neurons that showed task-related responses (Figure 6C). Our data suggest that the VP contains 2 large populations of oppositely-tuned neurons, activated by positive and negative rewards (Figure 6D,E). To gain insight into a potential explanation for this phenomenon, we investigated artificial neural networks with motivation that were subjected to conditions similar to those of the mice.

As the Pavlovian conditioning task includes time as a variable (Figure 6), we chose a recurrent neural network (RNN) as the basis of our model, as suggested by Sutton & Barto (1987). The RNN received 2 inputs. One input described the cue as a function of time within a trial (Figure 7A,B) – representing the state of the animal. Another input described motivation (constant within the entire trial), indicating whether the agent is in a positive ($\mu = +1$) or negative ($\mu = -1$) block of trials. The network learned to accurately predict the trial outcome based on the cue (Figure 7B). For example, in the negative block of trials ($\mu = -1$), before a cue is presented ($s = 0$), the expected value of future reward $V_t(\mu_t, s_t)$ starts from a low negative value, in expectation of a future negative reward. As the cue arrives, the expected value of future reward $V_t$ represents the expected outcome. For example, in trials with a large negative reward (the leftmost column in Figure 7B), the network adjusts its expectation to a lower value after the cue arrives ($s = -0.8$). For trials with a small negative reward (second column), no adjustment is necessary, and, therefore, the reward expectation $V_t$ remains unaffected by the cue. $V_t$ decreases slightly after the cue arrives due to the temporal discount $\gamma = 0.9$. For no-negative-reward trials (Figure 7B, column 3) in the negative block of trials, the expected reward increases after the cue arrives, reflecting the now more optimistic prediction. In the positive block of trials ($\mu = +1$, Figure 7B, columns 4-6), the behavior of the network is the same, except for the sign. Overall, our model yields reward expectations $V_t$ that accurately reflect motivation and future rewards.

We then examined the responses of neurons in the model.
We clustered the responses using an unsupervised clustering algorithm (Sinakevitch et al., 2018). The neural population contained two large groups of oppositely tuned cells (Figure 7C), elevating their activity in positive and negative reward trials, respectively, in agreement with the experimental observations in the brain (Figure 6C). Overall, we find a close correspondence between the activity of neurons in the artificial and biological networks.

What might be the functional significance of the two oppositely tuned neural populations? We found that the negative-reward neurons (Figure 7D, blue cluster) tend to form excitatory connections with each other, and so do the positive-reward neurons (red cluster). Oppositely tuned cells, on the other hand, tend to inhibit each other (Figure 7E,F). Thus, the RNN in our model yields a prediction for the structure of connectivity in the VP in the brain. Such connectivity helps maintain the information about reward expectation within the trial. Indeed, in the Pavlovian conditioning task, cue and reward are separated by a temporal delay. During the delay, the network is supposed to maintain the information about the upcoming reward and thus acts as a working memory network (Her et al., 2016), which keeps the reward expectation in its persistent activity. This persistent activity can be seen in both the responses of individual neurons in the VP in the brain (Figure 6C-E) and the RNN neurons in the model (Figure 7C). Previous studies of working memory and decision-making tasks (Machens et al., 2005; Wong et al., 2007; Her et al., 2016) suggest that such parametric persistent activity can be maintained by two groups of oppositely tuned neurons, in a network architecture called the “push-pull” circuit. This is exactly what we find in our RNN (Figure 7F). Memory is maintained in push-pull circuits via positive feedback. The positive feedback is produced by two forms of connectivity. First, similarly tuned neurons excite each other, as in Figure 7D. Second, oppositely tuned neurons inhibit each other, which introduces effective self-excitation via disinhibition. Overall, we show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely-tuned classes of neurons responding to positive and negative rewards. Our model also generates predictions for the structure of the VP connectivity." }, { "heading": "3 DISCUSSION", "text": "Motivation has been defined previously as the need-based modulation of reward magnitude. Here we propose an RL approach in which neural networks can be trained to include motivation in the calculation of action. We consider a diverse set of example networks that can solve different problems following a similar pattern. We train such networks using the TD rule via conventional backpropagation. We find that the networks can learn optimal behaviors, including behaviors that reflect complex scenarios of future motivation changes. When compared to the responses of neurons in the mouse brain, our neural network model accurately predicts behavioral outcomes, demonstrates similar patterns of neuronal responses, and generates predictions for network connectivity.

We trained our networks to compute future motivation-dependent reward in the Pavlovian conditioning task. Connecting RL – and, in particular, TD methods – to Pavlovian conditioning tasks has been the subject of extensive research, reviewed by Sutton & Barto (1987).
We found that the neurons in the RNNs trained to recognize motivation can be clustered into 2 oppositely tuned populations: neurons increasingly active in positive and negative reward trials. In agreement with this finding, we found two similar groups of neurons in the mouse VP: a basal ganglia region implicated in motivation-dependent estimates of reward (Richard et al., 2016). Thus, neural networks with motivation, trained to perform in realistic tasks, develop responses similar to those in the brain.
The recurrent network structure in this Pavlovian conditioning case is compatible with conventional models of working memory. The information about the upcoming reward – once supplied by a cue – is maintained in the network due to positive recurrent feedback. This feedback is produced by inhibition between two oppositely tuned populations of neurons, i.e., positive- and negative-reward-sensitive cells. Thus, the experimentally observed presence of particular neural populations may be a consequence of the functional requirements on the network to maintain persistent variables within a trial. This function is reflected in both neural responses and architecture. Our findings present a generative hypothesis for how information about the trial outcome is maintained in brain networks.
In recent work, Keramati & Gutkin (2014) show that homeostatic RL explains prominent motivation-related behavioral phenomena, including anticipatory responding (Mansfield & Cunningham, 1980), dose-dependent reinforcement and the potentiating effect of deprivation (Hodos, 1961), the inhibitory effect of irrelevant drives (Dickinson & Balleine, 2002), etc. Although homeostatic RL defines the rewards as the gradients of a cost function with a fixed point, the theoretical predictions generalize to models with linear, or approximately linear, multiplicative motivation. We therefore expect the behaviors of our models to be consistent with the large body of experimental data mentioned above.
Motivation offers a framework compatible with other methods in machine learning, such as R-learning, goal-conditioned RL, and hierarchical RL (HRL). In R-learning (Sinakevitch et al., 2018; Schwartz, 1993), the cumulative sum of future rewards is computed with respect to the average level. The average reward level is a slowly changing variable computed across several trials, which makes it similar to motivation. In goal-conditioned RL – the closest counterpart to RL with motivation – the Q-function depends on three parameters: Q(s_t, a_t, g), where g is the current static goal. In the motivation framework, multiple dynamic goals are present at the same time, and it is up to the agent to decide which one to pursue. HRL methods include the options framework (Sutton & Barto, 1998; Sutton et al., 1999), RL with subgoals (Sutton et al., 1999), feudal RL (Dayan & Hinton, 2000; Bacon & Precup, 2018), and others. In HRL, complex tasks are solved by breaking them into smaller, more manageable pieces. HRL approaches have several advantages compared to traditional RL, such as the transfer of knowledge from already learned tasks and the ability to learn solutions to complex tasks faster. Although HRL methods are computationally efficient and generate behaviors separated into multiple levels of organization – which resemble animals' behavior – a mapping of HRL methods to brain networks is missing. Here, we suggest that motivation offers a way for HRL algorithms to be implemented in the brain.
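To illustrate the contrast drawn here – a single Q-function conditioned on a dynamic motivation variable rather than a static goal – the following tabular sketch (our simplification; the model described in this paper uses deep networks and continuous motivation vectors) shows how changing µ switches the greedy policy without any relearning of Q.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative sizes; Q is indexed by state, action, and a discrete
# motivational state mu.
n_states, n_actions, n_motiv = 36, 5, 4
Q = rng.normal(0.0, 0.01, (n_states, n_actions, n_motiv))
gamma, alpha = 0.9, 0.1

def policy(s, m):
    """Greedy action under motivation m; switching m changes the
    behavior instantly, with no relearning."""
    return int(np.argmax(Q[s, :, m]))

def q_learning_step(s, a, m, reward, s_next):
    """Standard one-step Q-learning on motivation-conditioned values."""
    target = reward + gamma * Q[s_next, :, m].max()
    Q[s, a, m] += alpha * (target - Q[s, a, m])

s = 7
print([policy(s, m) for m in range(n_motiv)])  # may differ across motivations
```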
In the case of motivation, both the manager and the lower-level actor networks receive the same reward, which makes motivated networks different from, e.g., their feudal counterparts (Dayan & Hinton, 2000; Bacon & Precup, 2018).
As described above, actions in motivation-based RL are selected on the basis of the Q-function Q(s_t, a, µ). An action a_t selected at a certain time maximizes the Q-function, representing the total expected future reward, and leads to the transition of the agent to a new state: s_t → a_t → s_{t+1}. Because of the dependence of the Q-function on motivation, the action choice depends on the variable µ representing motivation in our framework. We argued above that motivation allows RL to have the flexibility of a rapid change in behavioral policy when the needs of an animal fluctuate. The same mechanism can be used to implement HRL, if the motivation µ is supplied by another, higher-level “manager” network with its own Q-function, Q^(1)(µ_t, a^(1), µ^(1)). When the higher-level network picks an action a^(1)_t, it leads to a change in the motivational state for the lower-level network: µ_t → a^(1)_t → µ_{t+1}, thus rapidly changing the behavior of the latter. The “manager” network could itself be controlled by a higher-level manager via its own motivation µ^(1). Such a decision hierarchy may include several management levels, with the dynamics of motivation on level l determined via the Q-function computed on level l + 1: Q^(l+1)(µ^(l)_t, a^(l+1), µ^(l+1)) and µ^(l)_t → a^(l+1)_t → µ^(l)_{t+1}. Although HRL is outside the scope of this project, we suggest that the motivation-based RL studied here may link the neurobiology of adaptive behaviors to developments in machine learning.
Overall, we suggest that motivation-based networks may generate complex ongoing behaviors that can adapt to dynamic changes in an organism's demands. Thus, neural networks with motivation can both encompass more complex behaviors than networks with a fixed reward function and be mapped onto the animals' circuits that control rewarded behaviors. Since animal performance in realistic conditions depends on the states of satiety, wakefulness, etc., our approach should help build more realistic computational models that include these variables. Importantly, when compared to the responses of neurons in the mouse brain, our neural network model accurately predicts behavioral outcomes, demonstrates similar patterns of neuronal responses, and generates predictions for network connectivity. In particular, our model explains why basal ganglia neurons form two classes: tuned to positive and negative rewards. In our model, these classes emerge from the need to maintain the information about future reward within the trial using positive recurrent feedback. Thus, the networks with motivation considered here give important insights into the mechanisms of signal processing in brain reward circuits." }, { "heading": "A APPENDIX – METHODS", "text": "A.1 THE FOUR DEMANDS TASK
To optimize the behaviors in the Four Demands task, we implemented a feedforward neural network as described below. As input, the network received an agent's state and motivation. The state variable contained an agent's position, which was represented by a 36-dimensional one-hot vector. The motivation was represented by a 4-dimensional integer vector. From both state and motivation variables, we subtracted the mean values. To balance the contributions of state and motivation to the network, we normalized their variances to 1 and 9, respectively, since the ratio of the numbers of these variables is 4/36 (for non-motivated agents, we set the motivation variable to zero). The inputs of the network were propagated through three hidden layers (100 sigmoid units each) and an output layer (5 linear units). We trained the network to compute the Q-values of the possible actions: to move left, right, up, down, or to stay.
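A minimal runnable sketch of this setup (our simplification: a single sigmoid hidden layer instead of three, and the input normalization omitted; the TD update it performs is described in the next paragraph, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pos, n_motiv, n_act, n_hid = 36, 4, 5, 100
gamma = 0.9

# Xavier-style initialization, one sigmoid hidden layer (paper: three).
W1 = rng.normal(0, np.sqrt(2.0 / (n_pos + n_motiv + n_hid)), (n_hid, n_pos + n_motiv))
W2 = rng.normal(0, np.sqrt(2.0 / (n_hid + n_act)), (n_act, n_hid))

def features(pos, motiv):
    x = np.zeros(n_pos + n_motiv)
    x[pos] = 1.0                 # one-hot position
    x[n_pos:] = motiv            # 4-dim motivation vector
    return x

def q_values(x):
    h = 1.0 / (1.0 + np.exp(-W1 @ x))
    return W2 @ h, h

def act(pos, motiv, eps):
    """Epsilon-greedy over the network's Q-outputs."""
    if rng.random() < eps:
        return int(rng.integers(n_act))
    q, _ = q_values(features(pos, motiv))
    return int(np.argmax(q))

def td_step(pos, motiv, a, r_vec, pos_next, lr=3e-3):
    """Semi-gradient TD(0) on Q(s, a, mu) with perceived reward r . mu."""
    global W1, W2
    x = features(pos, motiv)
    q, h = q_values(x)
    q_next, _ = q_values(features(pos_next, motiv))
    delta = float(np.dot(r_vec, motiv)) + gamma * q_next.max() - q[a]
    g_h = W2[a] * h * (1.0 - h)          # dQ_a / d(hidden pre-activation)
    W2[a] += lr * delta * h              # only the taken action's output
    W1 += lr * delta * np.outer(g_h, x)

# One illustrative update: agent at cell 3, motivated toward demand 0.
motiv = np.array([2.0, 0.0, 1.0, 0.0])
r_vec = np.array([1.0, 0.0, 0.0, 0.0])   # reward of type 0 consumed
a = act(3, motiv, eps=0.1)
td_step(3, motiv, a, r_vec, 4)
```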
On every iteration, we picked the action corresponding to the largest network output (Q-value). With probability ε, we replaced the selected action with a random action (ε-greedy policy; ε decreased exponentially from 0.5 to 0.05 throughout the simulation; for random-walk agents, we set ε = 1). If the selected action resulted in a step through a “wall”, the position remained unchanged; otherwise we updated the agent's position. For the agent's new position, we computed the perceived reward (r · µ^T), and used the Bellman equation (γ = 0.9) to compute the TD error. We then backpropagated the TD error through the network to update its weights (initialized using the Xavier rule). We performed 4·10^5 training iterations with the learning rate decreasing exponentially from 3·10^-3 to 3·10^-5. We trained the network using various motivation schedules as follows. Each component of the motivation was increased by one on every iteration. If a component of motivation µ_n reached the threshold θ_n, we stopped increasing this component any further. If a reward of type n was consumed on the current iteration, we dropped the corresponding component of motivation µ_n to zero. For motivated, non-motivated, and random-walk agents, we trained 41 models each (123 models total) with motivation thresholds θ_1 = θ_2 = θ_3 = θ_4 ranging from 1 to 100, spaced exponentially, with one training run per unique θ value. To mimic addiction, we also trained a model with θ_1 = θ_2 = θ_3 = 1 and θ_4 = 10. For each run, we displayed sequences of the agent's locations to establish the correspondence between policies and average reward rates.
A.2 THE TRANSPORT NETWORK TASK
To build an environment for the transport network task, we defined the locations of 10 “cities” by sampling x and y coordinates from the standard normal distribution N(0, 1). For these locations, we computed a Delaunay triangulation to define a network of roads between the cities. For each road (Delaunay graph edge), we computed its length – the Euclidean distance between the two cities it connects. We then selected multiple random subsets of 3 cities to be visited by an agent: the training set (10^4 target subsets) and the testing set (50 different target subsets).
To navigate the transport network, we implemented a feedforward neural network as described below. As input, the network received an agent's state and motivation. The state variable contained an agent's position, which was represented by a 10-dimensional one-hot vector. The motivation was represented by a 10-dimensional binary vector. To specify the agent's targets, we initialized the motivation vector with 3 non-zero components µ_{i_1}, ..., µ_{i_3}, corresponding to the target cities i_1, ..., i_3. The inputs of the network were propagated through a hidden layer (200 leaky ReLU units; leak α = 0.01) and an output layer (10 linear units). We trained the network to compute the Q-values of the potential actions (visiting each of the cities).
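The environment construction and the softmax policy used in the next paragraph can be sketched as follows (our illustration; scipy's Delaunay is one plausible way to realize the triangulation, and the Q-values are left as an arbitrary array):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
n_cities = 10
xy = rng.standard_normal((n_cities, 2))   # city coordinates ~ N(0, 1)

# Roads = edges of the Delaunay triangulation, weighted by Euclidean length.
tri = Delaunay(xy)
adj = {i: set() for i in range(n_cities)}
for simplex in tri.simplices:
    for a in range(3):
        for b in range(a + 1, 3):
            i, j = simplex[a], simplex[b]
            adj[i].add(j); adj[j].add(i)

def dist(i, j):
    return float(np.linalg.norm(xy[i] - xy[j]))

def softmax_step(q_vals, city, beta=0.5):
    """Choose the next city among the neighbours of `city` with a softmax
    (beta = 0.5) over their Q-values."""
    nbrs = sorted(adj[city])
    logits = beta * np.array([q_vals[j] for j in nbrs])
    p = np.exp(logits - logits.max()); p /= p.sum()
    return int(rng.choice(nbrs, p=p))

print(softmax_step(rng.standard_normal(n_cities), city=0))
```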
On every iteration within a task episode, we picked an action to go from the current city to one of the immediately connected cities, and then updated the current position. To choose the action, we used the softmax policy (β = 0.5) over the Q-values of the available moves. When the motivation µ_j towards the new position j was non-zero, we yielded a reward of 5 and dropped the motivation µ_j to 0. On every iteration, we reduced the reward by the distance travelled during that iteration. The task episode terminated when all components of the motivation were equal to zero. On every iteration, we used the Bellman equation (γ = 0.9) to compute the TD error. We backpropagated the TD error through the network to update its weights (initialized using the Xavier rule). Overall, we performed training on 10^4 task episodes with a learning rate of 10^-2. To assess model performance, we evaluated the model on the testing set and compared the resulting path lengths to the precomputed shortest-path solutions.
A.3 PAVLOVIAN CONDITIONING TASK
To build a circuit model of motivation in the Pavlovian conditioning task, we implemented a recurrent neural network. We trained the network on terminating sequences of 20 iterations, representing time within individual trials. As input, the network received an agent's state and motivation. The state variable contained a cue (conditioned stimulus; CS), which we chose randomly from {±0.0, ±0.4, ±0.8}. Depending on the iteration, the state variable was equal either to the CS (iterations 6-9 out of 20) or to zero (elsewhere). The motivation variable µ = ±1 was equal to the sign of the CS; it was constant throughout the entire sequence of 20 iterations. The inputs of the network were propagated through a recurrent layer (40 sigmoid units) and an output layer (1 linear unit). We trained the network to compute the V-values for each iteration within the sequence.
On every iteration, we computed a reward reflecting the unconditioned stimulus (US). Depending on the iteration, the reward was equal either to the CS (iterations 15-16) or to zero (elsewhere). We used the rewards in the Bellman equation (γ = 0.9) to compute a TD error for every iteration. We then backpropagated the TD errors through time to update the network's weights (initially drawn from the uniform distribution U(−10^-5, 10^-5)). We performed training on 3·10^5 minibatches of 20 sequences each, with a learning rate of 10^-1.
We then clustered the recurrent neurons after training as follows. First, for every neuron we computed 6 average activations, corresponding to the unique types of trials (positive/negative motivation with zero/small/large reward). Then, we used the average activations to compute a correlation matrix for the neurons. Finally, we processed the correlation matrix with the watershed algorithm (marker-based; h = 0.04), thereby clustering the recurrent neurons. To examine the connectivity between the clusters, we used the weights of the recurrent neurons to compute a new correlation matrix. We then applied t-SNE in 3 dimensions (p = 30) and color-coded the neurons with respect to the clusters." } ]
2019
null
SP:5ca4c62eae1c6a5a870524715c3be44c40383f98
[ "The paper presents an algorithm to match two distributions with latent variables, named expected information maximization (EIM). Specifically, EIM is based on the I-Projection, which basically is equivalent to minimizing the reverse KL divergence (i.e. min KL[p_model || p_data]); to handle latent variables, an upper-bound is derived, which is the corresponding reverse KL divergence in the joint space. To minimize that joint reverse KL, a specific procedure is developed, leading to the presented EIM. EIM variants for different applications are discussed. Fancy robot-related experiments are used to evaluate the presented algorithm.", "This paper propose EIM an analog to EM but to perform the I-projection (i.e. reverse-KL) instead of the usual M-projection for EM. The motivation is that the reverse-KL is mode-seeking in contrast to the forward-KL which is mode-covering. The authors argue that in the case that the model is mis-specified, I-projection is sometimes desired as to avoid putting mass on very unlikely regions of the space under the target p." ]
Modelling highly multi-modal data is a challenging problem in machine learning. Most algorithms are based on maximizing the likelihood, which corresponds to the M(oment)-projection of the data distribution to the model distribution. The M-projection forces the model to average over modes it cannot represent. In contrast, the I(nformation)-projection ignores such modes in the data and concentrates on the modes the model can represent. Such behavior is appealing whenever we deal with highly multi-modal data where modelling single modes correctly is more important than covering all the modes. Despite this advantage, the I-projection is rarely used in practice due to the lack of algorithms that can efficiently optimize it based on data. In this work, we present a new algorithm called Expected Information Maximization (EIM) for computing the I-projection solely based on samples for general latent variable models, where we focus on Gaussian mixture models and Gaussian mixtures of experts. Our approach applies a variational upper bound to the I-projection objective which decomposes the original objective into single objectives for each mixture component as well as for the coefficients, allowing an efficient optimization. Similar to GANs, our approach employs discriminators but uses a more stable optimization procedure based on a tight upper bound. We show that our algorithm is much more effective in computing the I-projection than recent GAN approaches and we illustrate the effectiveness of our approach for modelling multi-modal behavior on two pedestrian and traffic prediction datasets.
[ { "affiliations": [], "name": "Philipp Becker" }, { "affiliations": [], "name": "Oleg Arenz" } ]
[ { "authors": [ "Abbas Abdolmaleki", "Rudolf Lioutikov", "Jan R Peters", "Nuno Lau", "Luis Pualo Reis", "Gerhard Neumann" ], "title": "Model-based relative entropy stochastic search", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Syed Mumtaz Ali", "Samuel D Silvey" ], "title": "A general class of coefficients of divergence of one distribution from another", "venue": "Journal of the Royal Statistical Society. Series B (Methodological),", "year": 1966 }, { "authors": [ "O. Arenz", "M. Zhong", "G. Neumann" ], "title": "Efficient gradient-free variational inference using policy search", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Christopher M. Bishop" ], "title": "Pattern Recognition and Machine Learning (Information Science and Statistics)", "venue": null, "year": 2006 }, { "authors": [ "Lev M Bregman" ], "title": "The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming", "venue": "USSR computational mathematics and mathematical physics,", "year": 1967 }, { "authors": [ "Liqun Chen", "Shuyang Dai", "Yunchen Pu", "Erjin Zhou", "Chunyuan Li", "Qinliang Su", "Changyou Chen", "Lawrence Carin" ], "title": "Symmetric variational autoencoder and connections to adversarial learning", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "Arthur P Dempster", "Nan M Laird", "Donald B Rubin" ], "title": "Maximum likelihood from incomplete data via the em algorithm. Journal of the royal statistical society", "venue": "Series B (methodological),", "year": 1977 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Jean-Baptiste Hiriart-Urruty", "Claude Lemaréchal" ], "title": "Fundamentals of convex analysis", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Robert A Jacobs", "Michael I Jordan", "Steven J Nowlan", "Geoffrey E Hinton" ], "title": "Adaptive mixtures of local experts", "venue": "Neural computation,", "year": 1991 }, { "authors": [ "Sham M Kakade" ], "title": "A natural policy gradient", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Solomon Kullback", "Richard A Leibler" ], "title": "On information and sufficiency", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Chunyuan Li", "Ke Bai", "Jianqiao Li", "Guoyin Wang", "Changyou Chen", "Lawrence Carin" ], "title": "Adversarial learning of a sampler based on an unnormalized distribution", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Lars Maaløe", "Casper Kaae Sønderby", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Auxiliary deep generative models", "venue": "In International Conference on Machine Learning,", 
"year": 2016 }, { "authors": [ "XuanLong Nguyen", "Martin J Wainwright", "Michael I Jordan" ], "title": "Estimating divergence functionals and the likelihood ratio by convex risk minimization", "venue": "IEEE Transactions on Information Theory,", "year": 2010 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Manfred Opper", "David Saad" ], "title": "Advanced mean field methods: Theory and practice", "venue": "MIT press,", "year": 2001 }, { "authors": [ "Alex Pentland", "Andrew Liu" ], "title": "Modeling and prediction of human behavior", "venue": "Neural Comput.,", "year": 1999 }, { "authors": [ "Ben Poole", "Alexander A Alemi", "Jascha Sohl-Dickstein", "Anelia Angelova" ], "title": "Improved generator objectives for gans", "venue": "arXiv preprint arXiv:1612.02780,", "year": 2016 }, { "authors": [ "Rajesh Ranganath", "Dustin Tran", "David Blei" ], "title": "Hierarchical variational models", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Alexandre Robicquet", "Amir Sadeghian", "Alexandre Alahi", "Silvio Savarese" ], "title": "Learning social etiquette: Human trajectory understanding in crowded scenes", "venue": null, "year": 2016 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Masashi Sugiyama", "Taiji Suzuki", "Takafumi Kanamori" ], "title": "Density-ratio matching under the bregman divergence: a unified framework of density-ratio estimation", "venue": "Annals of the Institute of Statistical Mathematics,", "year": 2012 }, { "authors": [ "Masatoshi Uehara", "Issei Sato", "Masahiro Suzuki", "Kotaro Nakayama", "Yutaka Matsuo" ], "title": "Generative adversarial nets from a density ratio estimation perspective", "venue": "arXiv preprint arXiv:1610.02920,", "year": 2016 }, { "authors": [ "GAN. Nowozin" ], "title": "2016) propose the following adversarial objective, based on a variatonal bound for f -divergences", "venue": "(Nguyen et al.,", "year": 2010 }, { "authors": [ "Nowozin" ], "title": "g’s for various f− divergences and chose exclusively monotony increasing functions which output large values for samples that are believed to be from the data distribution. For the I-projection they suggest g(v) = −exp(−v). Thus the f -GAN objective for the I-projection is given by argminq(x)argmaxV", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning the density of highly multi-modal distributions is a challenging machine learning problem relevant to many fields such as modelling human behavior (Pentland & Liu, 1999). Most common methods rely on maximizing the likelihood of the data. It is well known that the maximum likelihood solution corresponds to computing the M(oment)-projection of the data distribution to the parametric model distribution (Bishop, 2006). Yet, the M-projection averages over multiple modes in case the model distribution is not rich enough to fully represent the data (Bishop, 2006). This averaging effect can result in poor models, that put most of the probability mass in areas that are not covered by the data. The counterpart of the M-projection is the I(nformation)-projection. The I-projection concentrates on the modes the model is able to represent and ignores the remaining ones. Hence, it does not suffer from the averaging effect (Bishop, 2006).\nIn this paper, we explore the I-projection for mixture models which are typically trained by maximizing the likelihood via expectation maximization (EM) (Dempster et al., 1977). Despite the richness of mixture models, the averaging problem remains as we typically do not know the correct number\nof modes and it is hard to identify all modes of the data correctly. By the use of the I-projection, our mixture models do not suffer from this averaging effect and can generate more realistic samples that are less distinguishable from the data. In this paper we concentrate on learning Gaussian mixture models and conditional Gaussian mixtures of experts (Jacobs et al., 1991) where the mean and covariance matrix are generated by deep neural networks.\nWe propose Expected Information Maximization (EIM) 1, a novel approach capable of computing the I-projection between the model and the data. By exploiting the structure of the I-projection, we can derive a variational upper bound objective, which was previously used in the context of variational inference (Maaløe et al., 2016; Ranganath et al., 2016; Arenz et al., 2018). In order to work with this upper bound objective based on samples, we use a discriminator to approximate the required density ratio, relating our approach to GANs (Goodfellow et al., 2014; Nowozin et al., 2016; Uehara et al., 2016). The discriminator also allows us to use additional discriminative features to improve model quality. In our experiments, we demonstrate that EIM is much more effective in computing the I-projection than recent GAN approaches. We apply EIM to a synthetic obstacle avoidance task, an inverse kinematic task of a redundant robot arm as well as a pedestrian and car prediction task using the Stanford Drone Dataset (Robicquet et al., 2016) and a traffic dataset from the Next Generation Simulation program." }, { "heading": "2 PRELIMINARIES", "text": "Our approach heavily builds on minimizing Kullback-Leibler divergences as well as the estimation of density ratios. We will therefore briefly review both concepts.\nDensity Ratio Estimation. Our approach relies on estimating density ratios r(x) = q(x)/p(x) based on samples of q(x) and p(x). Sugiyama et al. (2012) introduced a framework to estimate such density ratios based on the minimization of Bregman divergences (Bregman, 1967). For our work we employ one approach from this framework, namely density ratio estimation by binary logistic regression. Assume a logistic regressor C(x) = σ(φ(x)) with logits φ(x) and sigmoid activation function σ. 
Further, we train C(x) to predict the probability that a given sample x was drawn from q(x). It can be shown that such a logistic regressor using a cross-entropy loss is optimal for C(x) = q(x)/(q(x) + p(x)). Using this relation, we can compute the log density ratio estimator by

$$\log \frac{q(x)}{p(x)} = \log \frac{q(x)/\big(q(x)+p(x)\big)}{p(x)/\big(q(x)+p(x)\big)} = \log \frac{C(x)}{1 - C(x)} = \sigma^{-1}(C(x)) = \phi(x).$$

The logistic regressor is trained by minimizing the binary cross-entropy

$$\operatorname{argmin}_{\phi(x)} \mathrm{BCE}(\phi(x), p(x), q(x)) = -\mathbb{E}_{q(x)}\left[\log \sigma(\phi(x))\right] - \mathbb{E}_{p(x)}\left[\log\big(1 - \sigma(\phi(x))\big)\right],$$

where different regularization techniques such as ℓ2 regularization or dropout (Srivastava et al., 2014) can be used to avoid overfitting.

¹Code available at https://github.com/pbecker93/ExpectedInformationMaximization

Moment and Information Projection. The Kullback-Leibler divergence (Kullback & Leibler, 1951) is a standard similarity measure for distributions. It is defined as KL(p(x)||q(x)) = ∫ p(x) log(p(x)/q(x)) dx. Due to its asymmetry, the Kullback-Leibler divergence provides two different optimization problems (Bishop, 2006) to fit a model distribution q(x) to a target distribution p(x), namely

$$\underbrace{\operatorname{argmin}_{q(x)} \mathrm{KL}(p(x)\|q(x))}_{\text{Moment-projection}} \qquad \text{and} \qquad \underbrace{\operatorname{argmin}_{q(x)} \mathrm{KL}(q(x)\|p(x))}_{\text{Information-projection}}.$$

Here, we will assume that p(x) is the data distribution, i.e., p(x) is unknown but we have access to samples from p(x). It can easily be seen that computing the M-projection onto the data distribution is equivalent to maximizing the likelihood (ML) of the model (Bishop, 2006). ML solutions match the moments of the model with the moments of the target distribution, which results in averaging over modes that cannot be represented by the model. In contrast, the I-projection forces the learned generator q(x) to have low probability wherever p(x) has low probability, which is also called zero forcing." }, { "heading": "3 RELATED WORK", "text": "We will now discuss competing methods for computing the I-projection that are based on GANs. Those are, to the best of our knowledge, the only other approaches capable of computing the I-projection solely based on samples of the target distribution. Furthermore, we will distinguish our approach from approaches based on variational inference that also use the I-projection.
Variational Inference. The I-projection is a common objective in variational inference (Opper & Saad, 2001; Bishop, 2006; Kingma & Welling, 2013). Those methods aim to fit tractable approximations to intractable distributions whose unnormalized density is available. EIM, on the other hand, does not assume access to the unnormalized density of the target distribution but only to samples. Hence, it is not a variational inference approach, but a density estimation approach. However, our approach uses an upper bound that has previously been applied to variational inference (Maaløe et al., 2016; Ranganath et al., 2016; Arenz et al., 2018). EIM is especially related to the VIPS algorithm (Arenz et al., 2018), which we extend from the variational inference case to the density estimation case. Additionally, we introduce conditional latent variable models into the approach.
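As a concrete illustration of the density-ratio estimator reviewed in Section 2, the following sketch (ours, not the released code) fits a linear logit φ(x) = wx + b by logistic regression on samples from two 1-D Gaussians, for which the true log-ratio is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
xq = rng.normal(0.0, 1.0, 2000)   # samples from q = N(0, 1)
xp = rng.normal(1.0, 1.0, 2000)   # samples from p = N(1, 1)

# phi(x) = w x + b; the experiments in the paper use small MLPs instead.
w, b = 0.0, 0.0
sigm = lambda t: 1.0 / (1.0 + np.exp(-t))

for _ in range(3000):
    # BCE with labels 1 for q-samples and 0 for p-samples.
    gq = sigm(w * xq + b) - 1.0    # d BCE / d logit on q-samples
    gp = sigm(w * xp + b)          # d BCE / d logit on p-samples
    w -= 0.1 * ((gq * xq).mean() + (gp * xp).mean())
    b -= 0.1 * (gq.mean() + gp.mean())

# For these Gaussians, log q(x)/p(x) = -x + 0.5 exactly.
print(w, b)   # approximately -1.0 and 0.5
```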
Generative Adversarial Networks. While the original GAN approach minimizes the Jensen-Shannon divergence (Goodfellow et al., 2014), GANs have since been adapted to a variety of other distance measures between distributions, such as the Wasserstein distance (Arjovsky et al., 2017), the symmetric KL (Chen et al., 2018), and arbitrary f-divergences (Ali & Silvey, 1966; Nowozin et al., 2016; Uehara et al., 2016; Poole et al., 2016). Since the I-projection is a special case of an f-divergence, those approaches are of particular relevance to our work. Nowozin et al. (2016) use a variational bound for f-divergences (Nguyen et al., 2010) to derive their approach, the f-GAN. Uehara et al. (2016) use a bound that directly follows from the density-ratio estimation under Bregman divergences framework introduced by Sugiyama et al. (2012) to obtain their b-GAN. While the b-GAN's discriminator directly estimates the density ratio, the f-GAN's discriminator estimates an invertible mapping of the density ratio. Yet, in the case of the I-projection, both the f-GAN and the b-GAN yield the same objective, as we show in Appendix C.2. For both the f-GAN and the b-GAN, the desired f-divergence determines the discriminator objective. Uehara et al. (2016) note that the discriminator objective implied by the I-projection is unstable. As both approaches are formulated in a general way to minimize any f-divergence, they do not exploit the special structure of the I-projection. Exploiting this structure permits us to apply a tight upper bound of the I-projection for latent variable models, which results in a higher quality of the estimated models.
Li et al. (2019) introduce an adversarial approach to compute the I-projection based on density ratios estimated by logistic regression. Yet, their approach assumes access to the unnormalized target density, i.e., they are working in a variational inference setting. The most important difference to GANs is that we do not base EIM on an adversarial formulation and no adversarial game has to be solved. This removes a major source of instability in the training process, which we discuss in more detail in Section 4.3." }, { "heading": "4 EXPECTED INFORMATION MAXIMIZATION", "text": "Expected Information Maximization (EIM) is a general algorithm for minimizing the I-projection for any latent variable model. We first derive EIM for general marginal latent variable models, i.e., q(x) = ∫ q(x|z)q(z)dz, and subsequently extend our derivations to conditional latent variable models, i.e., q(x|y) = ∫ q(x|z, y)q(z|y)dz. EIM uses an upper bound for the objective of the marginal distribution. Similar to Expectation-Maximization (EM), our algorithm iterates between an M-step and an E-step. In the corresponding M-step, we minimize the upper bound, and in the E-step we tighten it using a variational distribution." }, { "heading": "4.1 EIM FOR LATENT VARIABLE MODELS", "text": "The I-projection can be simplified using a (tight) variational upper bound (Arenz et al., 2018), which can be obtained by introducing an auxiliary distribution q̃(z|x) and using Bayes' rule:

$$\mathrm{KL}(q(x)\|p(x)) = \underbrace{U_{\tilde q, p}(q)}_{\text{upper bound}} - \mathbb{E}_{q(x)}\big[\underbrace{\mathrm{KL}(q(z|x)\|\tilde q(z|x))}_{\ge 0}\big],$$

where

$$U_{\tilde q, p}(q) = \iint q(x|z)q(z)\left(\log\frac{q(x|z)q(z)}{p(x)} - \log \tilde q(z|x)\right) dz\, dx. \quad (1)$$

The derivation of the bound is given in Appendix B. It is easy to see that U_{q̃,p}(q) is an upper bound, as the expected KL term is always non-negative.
In the corresponding E-step, the model from the previous iteration, which we denote as q_t(x), is used to tighten the bound by setting q̃(z|x) = q_t(x|z)q_t(z)/q_t(x). In the M-step, we update the model distribution by minimizing the upper bound U_{q̃,p}(q). Yet, as opposed to Arenz et al. (2018), we cannot work directly with the upper bound since it still depends on log p(x), which we cannot evaluate. However, we can reformulate the upper bound by inserting the relation for q̃(z|x) given by the E-step into Eq. 1,

$$U_{q_t, p}(q) = \int q(z)\left(\int q(x|z)\log\frac{q_t(x)}{p(x)}dx + \mathrm{KL}(q(x|z)\|q_t(x|z))\right)dz + \mathrm{KL}(q(z)\|q_t(z)). \quad (2)$$

The upper bound now contains a density ratio between the old model distribution and the data. This density ratio can be estimated using samples of q_t and p, for example by using logistic regression as shown in Section 2. We can use the logits φ(x) of such a logistic regressor to estimate the log density ratio log(q_t(x)/p(x)) in Equation 2. This yields an upper bound U_{q_t,φ}(q) that depends on φ(x) instead of p(x). Optimizing this bound corresponds to the M-step of our approach. In the E-step, we set q_t to the newly obtained q and retrain the density ratio estimator φ(x). Both steps formally result in the following bilevel optimization problem:

$$q_{t+1} \in \operatorname{argmin}_{q(x)} U_{q_t, \phi^*}(q) \quad \text{s.t.} \quad \phi^*(x) \in \operatorname{argmin}_{\phi(x)} \mathrm{BCE}(\phi(x), p(x), q_t(x)).$$

Using a discriminator also comes with the advantage that we can use additional discriminative features g(x) as input to our discriminator that are not directly available to the generator. For example, if x models trajectories of pedestrians, g(x) could indicate whether the trajectory reaches any positions that are not plausible, such as rooftops or trees. These features simplify the discrimination task and can therefore improve our model accuracy, which is not possible with M-projection based algorithms such as EM." }, { "heading": "4.2 EIM FOR CONDITIONAL LATENT VARIABLE MODELS", "text": "For conditional distributions, we aim at finding the conditional I-projection

$$\operatorname{argmin}_{q(x|y)} \mathbb{E}_{p(y)}\left[\mathrm{KL}(q(x|y)\|p(x|y))\right].$$

The derivations for the conditional upper bound follow the same steps as the derivations in the marginal case, where all distributions are extended by the context variable y. We refer to the supplement for details. The log density ratio estimator φ(x, y) now discriminates between samples of the joint distributions of x and y. For training φ(x, y) we generate a new sample x for each context y, using the distribution q_old(x|y). Hence, as the context distribution is the same for the true data and the generated data, the log density ratio of the conditional distributions is equal to the log density ratio of the joint distributions.
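To see the resulting iteration end-to-end in the simplest possible setting, the following sketch (our toy construction, not the paper's implementation) runs EIM-style updates for a single 1-D Gaussian model with a linear discriminator. In this special case the KL-regularized M-step argmin_q E_q[φ(x)] + KL(q||q_t) has a closed form: the mean shifts against the slope of the estimated log-ratio and the variance is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
xp = rng.normal(1.0, 1.0, 4000)           # data from p = N(1, 1)
m, s = -2.0, 1.0                          # model q = N(m, s^2)
sigm = lambda t: 1.0 / (1.0 + np.exp(-t))

def fit_ratio(xq, xp, iters=3000, lr=0.1):
    """E-step helper: logistic regression so that phi(x) = w x + b
    estimates log q_t(x)/p(x) (cf. Section 2)."""
    w = b = 0.0
    for _ in range(iters):
        gq, gp = sigm(w * xq + b) - 1.0, sigm(w * xp + b)
        w -= lr * ((gq * xq).mean() + (gp * xp).mean())
        b -= lr * (gq.mean() + gp.mean())
    return w, b

for it in range(5):
    xq = rng.normal(m, s, 4000)           # samples from the current q_t
    w, b = fit_ratio(xq, xp)              # retrain discriminator on q_t
    # M-step: argmin_q E_q[phi] + KL(q || q_t); for a linear phi and a
    # Gaussian q, the minimizer is q ∝ q_t exp(-phi), i.e. a mean shift.
    m = m - s**2 * w
    print(f"iteration {it}: m = {m:.3f}") # approaches the data mean 1.0
```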
" }, { "heading": "4.3 RELATION TO GANS AND EM", "text": "There is a close relation between EIM and GANs due to the use of a logistic discriminator for the density ratio estimation. It is therefore informative to investigate the differences in the case without latent variables. In an adversarial formulation, the density ratio estimator would directly replace the density ratio in the original I-projection equation, i.e.,

$$\operatorname{argmin}_{q(x)} \int q(x)\phi^*(x)dx \quad \text{s.t.} \quad \phi^*(x) \in \operatorname{argmin}_{\phi(x)} \mathrm{BCE}(\phi(x), p(x), q^*(x)).$$

However, such adversarial games are often hard to optimize. In contrast, EIM offers a bilevel optimization problem where the discriminator is explicitly learned on the old model distribution q_t(x),

$$\operatorname{argmin}_{q(x)} \int q(x)\phi^*(x)dx + \mathrm{KL}(q(x)\|q_t(x)) \quad \text{s.t.} \quad \phi^*(x) \in \operatorname{argmin}_{\phi(x)} \mathrm{BCE}(\phi(x), p(x), q_t(x)).$$

Thus, there is no circular dependency between the optimal generator and the optimal discriminator. Figure 2 illustrates that the proposed non-adversarial formulation does not suffer from too large model updates. Choosing the number and step size of the updates is thus far less critical.
EIM can also be seen as the counterpart of Expectation-Maximization (EM). While EM optimizes the M-projection with latent variable models, EIM uses the I-projection. However, both approaches decompose the corresponding projections into an upper bound (or lower bound for EM) and a KL term that depends on the conditional distribution q(z|x) to tighten this bound. The exact relationship is discussed in Appendix C.1." }, { "heading": "4.4 EIM FOR GAUSSIAN MIXTURE MODELS", "text": "We consider Gaussian mixture models with d components, i.e., multivariate Gaussian distributions q(x|z_i) = N(µ_i, Σ_i) and a categorical distribution q(z) = Cat(π) for the coefficients. As the latent distribution q(z) is discrete, the upper bound in EIM (Equation 2) simplifies, as the integral over z can be written as a sum. Similar to the EM algorithm, this objective can be optimized individually for the coefficients and the components. For both updates, we will use similar update rules as defined in the VIPS algorithm (Arenz et al., 2018). VIPS uses a trust-region optimization for the components and the coefficients, where both updates can be solved in closed form as the components are Gaussian. The trust regions prevent the new model from moving too far away from q_t, where the density ratio estimator is inaccurate, and hence further stabilize the learning process. We will now sketch both updates; we refer to Appendix B.2 for the full details.
For updating the coefficients, we assume that the components have not yet been updated, and therefore KL(q(x|z_i)||q_t(x|z_i)) = 0 for all z_i. The objective for the coefficients thus simplifies to

$$\operatorname{argmin}_{q(z)} \sum_{i=1}^d q(z_i)\phi(z_i) + \mathrm{KL}(q(z)\|q_t(z)) \quad \text{with} \quad \phi(z_i) = \mathbb{E}_{q(x|z_i)}\left[\phi(x)\right], \quad (3)$$

where φ(z_i) can be approximated using samples from the corresponding component. This objective can easily be optimized in closed form, as shown in the VIPS algorithm (Arenz et al., 2018). We also use a KL trust region to specify the step size of the update. For updating the individual components, the objective simplifies to

$$\operatorname{argmin}_{q(x|z_i)} \mathbb{E}_{q(x|z_i)}\left[\phi(x)\right] + \mathrm{KL}(q(x|z_i)\|q_t(x|z_i)). \quad (4)$$

As in VIPS, this optimization problem can be solved in closed form using the MORE algorithm (Abdolmaleki et al., 2015). The MORE algorithm uses a quadratic surrogate function that locally approximates φ(x). The resulting solution optimizes Equation 4 under a KL trust region. The pseudo-code of EIM for GMMs can be found in Appendix A.
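A sketch of the closed-form coefficient update of Equation 3 under the KL trust region (our rendering of the VIPS/MORE-style solution, with the dual following Appendix B.2; the ε and φ values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lse(a):
    """Numerically stable log-sum-exp."""
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def coefficient_update(log_qt, phi, eps=0.05):
    """Trust-region update of the mixture coefficients (Equation 3):
    q(z_i) ∝ q_t(z_i) exp(-phi(z_i) / (1 + eta)), with the multiplier
    eta obtained from the convex dual so that KL(q || q_t) <= eps.
    The '1 +' accounts for the extra KL term in the EIM objective."""
    def dual(eta):
        return eta * eps + (1.0 + eta) * lse(log_qt - phi / (1.0 + eta))
    eta = minimize_scalar(dual, bounds=(1e-8, 1e8), method="bounded").x
    logits = log_qt - phi / (1.0 + eta)
    return logits - lse(logits)

# Illustrative numbers: the discriminator reports that component 0
# over-represents q_t relative to the data (large expected phi), so its
# weight shrinks; component 2 (negative phi) gains weight.
log_qt = np.log(np.full(3, 1.0 / 3.0))
phi = np.array([2.0, 0.0, -1.0])
new_q = np.exp(coefficient_update(log_qt, phi, eps=0.05))
print(new_q, new_q.sum())
```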
" }, { "heading": "4.5 EIM FOR GAUSSIAN MIXTURES OF EXPERTS", "text": "In the conditional case, we consider mixtures of experts consisting of d multivariate Gaussians whose parameters depend on an input y in a nonlinear fashion, i.e., q(x|z_i, y) = N(ψ_{µ,i}(y), ψ_{Σ,i}(y)), and the gating is given by a neural network with softmax output. We again decompose the resulting upper bound into individual update steps for the components and the gating. Yet, closed-form solutions are no longer available and we need to resort to gradient-based updates. The objective for updating the gating is given by

$$\operatorname{argmin}_{q(z|y)} \sum_{i=1}^d \mathbb{E}_{p(y)q(x|z_i,y)}\left[q(z_i|y)\phi(x, y)\right] + \mathbb{E}_{p(y)}\left[\mathrm{KL}(q(z|y)\|q_t(z|y))\right]. \quad (5)$$

We minimize this equation w.r.t. the parameters of the gating by gradient descent using the Adam (Kingma & Ba, 2014) algorithm. The objective for updating a single component i is given by

$$\operatorname{argmin}_{q(x|z_i,y)} \mathbb{E}_{\tilde p(y|z_i)}\left[\mathbb{E}_{q(x|z_i,y)}\left[\phi(x, y)\right] + \mathrm{KL}(q(x|z_i,y)\|q_t(x|z_i,y))\right], \quad (6)$$

where p̃(y|z_i) = p(y)q(z_i|y)/q(z_i). Note that we normalized the objective by q(z_i) = ∫ p(y)q(z_i|y)dy to ensure that components with a low prior q(z_i) also get large enough gradients for the updates. As we have access to the derivatives of the density ratio estimator w.r.t. x, we can optimize Equation 6 with gradient descent using the reparametrization trick (Kingma & Welling, 2013) and Adam." }, { "heading": "5 EVALUATION", "text": "We compare our approach to GANs and perform an ablation study on a toy task, with data sampled from known mixture models. We further apply our approach to two synthetic datasets, learning the joint configurations of a planar robot as well as a nonlinear obstacle avoidance task, and two real datasets, namely the Stanford Drone Dataset (Robicquet et al., 2016) and a traffic dataset from the Next Generation Simulation program. A full overview of all hyperparameters and network architectures can be found in Appendix E." }, { "heading": "5.1 COMPARISON TO GENERATIVE ADVERSARIAL APPROACHES AND ABLATION STUDY", "text": "We compare to the f-GAN, which is the only other method capable of minimizing the I-projection solely based on samples. We use data sampled from randomly generated GMMs with different numbers of components and dimensionalities. To study the influence of the previously mentioned differences between EIM and generative adversarial approaches, we also perform an ablation study. We compare to a version of EIM where we neglect the additional KL term (EIM, no KL), a version where we trained all components and the coefficients jointly using gradient descent (Joint EIM), and a version where we do both (Joint EIM, no KL). The average I-projection achieved by the various approaches can be found in Figure 3." }, { "heading": "5.2 LINE REACHING WITH PLANAR ROBOT", "text": "We extended the introductory example of the planar reaching task and collected expert data from a 10-link planar robot tasked with reaching a point on a line. We fitted GMMs with an increasing number of components using EIM, EIM with additional features, where the end-effector coordinates for a given joint configuration were provided, and EM. Even for a large number of components, we see effects similar to the introductory example, i.e., the M-projection solution provided by EM fails to reach the line while EIM manages to do so. For small numbers of components, EIM ignores parts of the line, while more and more of it gets covered as we increase the number of components. With the additional features, the imitation of the line reaching was even more accurate. See Figure 4 for the average distance between the end-effector and the line as well as samples from both EM and EIM." }, { "heading": "5.3 PEDESTRIAN AND TRAFFIC PREDICTION", "text": "We evaluated our approach on data from the Stanford Drone Dataset (SDD) (Robicquet et al., 2016) and a traffic dataset from the Next Generation Simulation (NGS) program². The SDD data consists of trajectories of pedestrians, bikes, and cars, and we only used the data corresponding to a single video of a single scene (Video 1, deathCircle). The NGS data consists of trajectories of cars, where we considered the data recorded on Lankershim Boulevard.
In both cases, we extracted trajectories of length 5, yielding highly multimodal data due to pedestrians, bikes, and cars moving at different speeds and in different directions. We evaluated the achieved log-likelihood of EIM and EM; see Figure 5. EM achieves the highest likelihood as it directly optimizes this measure. However, we can already see that EM massively overfits when we increase the number of components, as the test-set likelihood degrades. EIM, on the other hand, produced better models with an increasing number of components. Additionally, we generated a mask indicating whether a given point is on the road or not and evaluated how realistic the learned models are by measuring the number of samples violating the mask, i.e., predicting road users outside of the road. We also evaluate a version of EIM where we provide additional features indicating if the mask is violated.

² https://data.transportation.gov/Automobiles/Next-Generation-Simulation-NGSIM-Vehicle-Trajector/8ect-6jqj

EIM achieves a much better value on this mask for the NGS dataset, while we needed the additional mask features for the discriminator on the SDD dataset to outperform EM. Both experiments show that EIM can learn highly multi-modal density estimates that produce more realistic samples than EM. They further show that the models learned by EIM can be refined with additional prior knowledge provided as feature vectors." }, { "heading": "5.4 OBSTACLE AVOIDANCE", "text": "We evaluate the conditional version of EIM on an artificial obstacle avoidance task. The context contains the locations of three obstacles within an image. The gating, as well as the components, are given by deep neural networks. Details about the network architectures can be found in the appendix. The data consists of trajectories going from the left to the right of the image. The trajectories are defined by setting 3 via-points such that no obstacle is hit. To generate the data, we sample via-points over and under the obstacles with a probability proportional to the distance between the obstacle and the image border. Hence, for three obstacles, there are 2^3 = 8 different modes in the data. Note that, like in most real-world scenarios, the expert data is not perfect, and about 13% of the trajectories in the dataset hit an obstacle. We fit models with various numbers of components to this data using EIM and EM and compare their performance on generating trajectories that achieve the goal. Results are shown in Figure 6, together with a visualization of the task and samples produced by EIM and EM. EIM was able to identify most modes for the different given inputs and did not suffer from any averaging effect. In contrast, EM does not find all modes. As a consequence, some components of the mixture model had to average over multiple modes, resulting in poor quality trajectories." }, { "heading": "6 CONCLUSION", "text": "We introduced Expected Information Maximization (EIM), a novel approach for computing the I-projection between general latent variable models and a target distribution, solely based on samples of the latter. General upper bound objectives for marginal and conditional distributions were derived, resulting in an algorithm similar to EM, but tailored for the I-projection instead of the M-projection. We introduced efficient methods to optimize these upper bound objectives for mixture models. In our experiments, we demonstrated the benefits of the I-projection for different behavior modelling tasks. 
The introduced approach opens various pathways for future research. While we focused on mixture models, the derived upper bounds are not exclusive to those and can be used for arbitrary latent variable models. Another possibility is an online adaptation of the number of used components. Arenz et al. (2018) propose heuristics for such an adaptation in their VIPS approach. Those could easily be adapted to our approach." }, { "heading": "A PSEUDO CODE", "text": "EIM-for-GMMs({x_p^(j)}_{j=1...N}, q(x))
Input: data {x_p^(j)}_{j=1...N}, initial model q(x) = Σ_{i=1}^d q(x|z_i)q(z_i) = Σ_{i=1}^d π_i N(x|µ_i, Σ_i)
for i in number of iterations do
  E-step: q_t(z) = q(z), q_t(x|z_i) = q(x|z_i) for all components i
  Update density ratio estimator:
    sample data from the model {x_q^(j)}_{j=1...N} ~ q_t(x)
    retrain the density ratio estimator φ(x) on {x_p^(j)}_{j=1...N} and {x_q^(j)}_{j=1...N}
  M-step coefficients:
    for i in number of components do
      compute loss l_i = (1/N) Σ_{j=1}^N φ(x_q^(j)) with samples {x_q^(j)}_{j=1...N} ~ q_t(x|z_i)
    end
    update q(z) using the losses l_i and the MORE equations
  M-step components:
    for i in number of components do
      fit a surrogate φ̂(x) to pairs (x_q^(j), φ(x_q^(j))) with samples {x_q^(j)}_{j=1...N} ~ q_t(x|z_i)
      update q(x|z_i) using the surrogate φ̂(x) and the MORE equations
    end
end
Algorithm 1: Expected Information Maximization for Gaussian Mixture Models.
Pseudo-code for EIM for GMMs can be found in Algorithm 1." }, { "heading": "B DERIVATIONS", "text": "Derivation of the upper bound stated in Equation 1. We assume latent variable models q(x) = ∫ q(x|z)q(z)dz and use the identities q(x|z)q(z) = q(z|x)q(x) and log q(x) = log q(x|z)q(z) − log q(z|x).

$$\begin{aligned}
\mathrm{KL}(q(x)\|p(x)) &= \int q(x)\log\frac{q(x)}{p(x)}dx = \iint q(x|z)q(z)\log\frac{q(x)}{p(x)}dz\,dx \\
&= \iint q(x|z)q(z)\left(\log\frac{q(x|z)q(z)}{p(x)} - \log q(z|x)\right)dz\,dx \\
&= \iint q(x|z)q(z)\left(\log\frac{q(x|z)q(z)}{p(x)} - \log q(z|x) + \log\tilde q(z|x) - \log\tilde q(z|x)\right)dz\,dx \\
&= \iint q(x|z)q(z)\left(\log\frac{q(x|z)q(z)}{p(x)} - \log\tilde q(z|x)\right)dz\,dx - \int q(x)\int q(z|x)\log\frac{q(z|x)}{\tilde q(z|x)}dz\,dx \\
&= U(q, \tilde q, p) - \mathbb{E}_{q(x)}\left[\mathrm{KL}(q(z|x)\|\tilde q(z|x))\right].
\end{aligned}$$

After plugging the E-step, i.e., q̃(z|x) = q_t(x|z)q_t(z)/q_t(x), into the objective, it simplifies to

$$\begin{aligned}
U(q, \tilde q, p) &= \iint q(x|z)q(z)\left(\log\frac{q(x|z)q(z)}{p(x)} - \log\frac{q_t(x|z)q_t(z)}{q_t(x)}\right)dz\,dx \\
&= \iint q(x|z)q(z)\left(\log\frac{q_t(x)}{p(x)} + \log\frac{q(x|z)}{q_t(x|z)} + \log\frac{q(z)}{q_t(z)}\right)dz\,dx \\
&= \iint q(x|z)q(z)\log\frac{q_t(x)}{p(x)}dz\,dx + \mathbb{E}_{q(z)}\left[\mathrm{KL}(q(x|z)\|q_t(x|z))\right] + \mathrm{KL}(q(z)\|q_t(z)),
\end{aligned}$$

which concludes the derivation of the upper bound for latent variable models.
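The bound just derived can be checked numerically. The following sketch (our own, on an arbitrary 1-D toy model) evaluates KL(q||p) and U(q, q̃, p) on a grid and confirms that U ≥ KL, with equality when q̃(z|x) is the exact posterior:

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

pi = np.array([0.6, 0.4])                                  # q(z)
comp = np.stack([norm.pdf(x, -1, 1), norm.pdf(x, 2, 1)])   # q(x|z)
q = pi @ comp                                              # marginal q(x)
p = norm.pdf(x, 0.5, 2.0)                                  # target p(x)

kl = np.sum(q * np.log(q / p)) * dx

def upper_bound(qtilde):                   # qtilde: shape (2, len(x))
    joint = pi[:, None] * comp             # q(x|z) q(z)
    return np.sum(joint * (np.log(joint / p[None, :]) - np.log(qtilde))) * dx

posterior = pi[:, None] * comp / q[None, :]  # exact q(z|x)
uniform = np.full_like(posterior, 0.5)       # a deliberately bad q_tilde

print(f"KL(q||p)            = {kl:.4f}")
print(f"U with exact q(z|x) = {upper_bound(posterior):.4f}")  # equals KL
print(f"U with uniform      = {upper_bound(uniform):.4f}")    # larger
```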
B.1 DERIVATION OF THE CONDITIONAL UPPER BOUND

By introducing an auxiliary distribution q̃(z|x, y), the upper bound on the expected KL for conditional latent variable models q(x|y) = ∫ q(x|z, y)q(z|y)dz can be derived as

$$\begin{aligned}
\mathbb{E}_{p(y)}\left[\mathrm{KL}(q(x|y)\|p(x|y))\right] &= \iint p(y)q(x|y)\log\frac{q(x|y)}{p(x|y)}dx\,dy \\
&= \int p(y)\iint q(x|z,y)q(z|y)\left(\log\frac{q(x|z,y)q(z|y)}{p(x|y)} - \log q(z|x,y)\right)dz\,dx\,dy \\
&= \int p(y)\iint q(x|z,y)q(z|y)\left(\log\frac{q(x|z,y)q(z|y)}{p(x|y)} - \log\tilde q(z|x,y)\right)dz\,dx\,dy \\
&\quad - \iint p(y)q(x|y)\int q(z|x,y)\log\frac{q(z|x,y)}{\tilde q(z|x,y)}dz\,dx\,dy \\
&= U(q, \tilde q, p) - \mathbb{E}_{p(y),q(x|y)}\left[\mathrm{KL}(q(z|x,y)\|\tilde q(z|x,y))\right],
\end{aligned}$$

where the second equality adds and subtracts log q̃(z|x, y) exactly as in the marginal case. During the E-step, the bound is tightened by setting q̃(z|x, y) = q_t(x|z, y)q_t(z|y)/q_t(x|y):

$$\begin{aligned}
U(q, \tilde q, p) &= \int p(y)\iint q(x|z,y)q(z|y)\left(\log\frac{q_t(x|y)}{p(x|y)} + \log\frac{q(x|z,y)}{q_t(x|z,y)} + \log\frac{q(z|y)}{q_t(z|y)}\right)dz\,dx\,dy \\
&= \iiint p(y)q(z|y)q(x|z,y)\log\frac{q_t(x|y)}{p(x|y)}dx\,dz\,dy + \mathbb{E}_{p(y),q(z|y)}\left[\mathrm{KL}(q(x|z,y)\|q_t(x|z,y))\right] + \mathbb{E}_{p(y)}\left[\mathrm{KL}(q(z|y)\|q_t(z|y))\right],
\end{aligned}$$

which concludes the derivation of the upper bound for conditional latent variable models.

B.2 USING MORE FOR CLOSED-FORM UPDATES FOR GMMS

The MORE algorithm, as introduced by Abdolmaleki et al. (2015), can be used to solve optimization problems of the form

$$\operatorname{argmax}_{q(x)} \mathbb{E}_{q(x)}\left[f(x)\right] \quad \text{s.t.} \quad \mathrm{KL}(q(x)\|q_{old}(x)) \le \epsilon$$

for an exponential family distribution q(x), some function f(x), and an upper bound ε on the allowed change. Abdolmaleki et al. (2015) show that the optimal solution is given by

$$q(x) \propto q_{old}(x)\exp\left(\frac{f(x)}{\eta}\right) = \exp\left(\frac{\eta\log q_{old}(x) + f(x)}{\eta}\right),$$

where η denotes the Lagrangian multiplier corresponding to the KL constraint. In order to obtain this Lagrangian multiplier, the following convex dual function has to be minimized:

$$g(\eta) = \eta\epsilon + \eta\log\int\exp\left(\frac{\eta\log q_{old}(x) + f(x)}{\eta}\right)dx. \quad (7)$$

For discrete distributions, such as the categorical distribution used to represent the coefficients of a GMM, we can directly work with those equations. For continuous distributions, Abdolmaleki et al. (2015) propose approximating f(x) with a local surrogate. The features used to fit this surrogate are chosen such that they are compatible (Kakade, 2002), i.e., of the same form as the distribution's sufficient statistics. For multivariate Gaussians, the sufficient statistics are squared features, and thus the surrogate compatible with such a Gaussian distribution is given by

$$\hat f(x) = -\frac{1}{2}x^T\hat F x + \hat f^T x + f_0.$$

The parameters of this surrogate can now be used to update the natural parameters of the Gaussian, i.e., the precision matrix Q = Σ^{-1} and q = Σ^{-1}µ, by

$$Q = Q_t + \frac{1}{\eta}\hat F \quad \text{and} \quad q = q_t + \frac{1}{\eta}\hat f.$$

In order to apply the MORE algorithm to solve the optimization problems stated in Equation 3 and Equation 4, we make two trivial modifications. First, we invert the signs in Equations 3 and 4, as we are now maximizing. Second, to account for the additional KL term in our objectives, we add 1 to η everywhere except in the first term of the sum in Equation 7.
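The following 1-D sketch (our simplification: a scalar Gaussian, least-squares surrogate fitting, and a bisection search for η directly on the KL constraint rather than via the dual of Equation 7) illustrates the mechanics of such a MORE-style component update:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kl(m1, v1, m0, v0):
    """KL(N(m1, v1) || N(m0, v0)) for scalar Gaussians."""
    return 0.5 * (np.log(v0 / v1) + v1 / v0 + (m1 - m0) ** 2 / v0 - 1.0)

def more_component_update(m0, v0, phi, eps=0.05, n=2000):
    # 1) Fit the compatible quadratic surrogate -F x^2 / 2 + f x + c to
    #    phi by least squares on samples from the old component.
    xs = m0 + np.sqrt(v0) * rng.standard_normal(n)
    A = np.stack([-0.5 * xs ** 2, xs, np.ones(n)], axis=1)
    coef, _, _, _ = np.linalg.lstsq(A, phi(xs), rcond=None)
    F, f, _ = coef
    # 2) Natural-parameter step against the surrogate: Q = Q0 - F/eta,
    #    q = q0 - f/eta (minimization flips the sign of the MORE update).
    Q0, q0 = 1.0 / v0, m0 / v0
    m1, v1 = m0, v0
    lo, hi = 1e-6, 1e8
    for _ in range(100):               # bisection on the KL constraint
        eta = np.sqrt(lo * hi)
        Q1 = Q0 - F / eta
        if Q1 <= 0:                    # step too aggressive
            lo = eta
            continue
        m1, v1 = (q0 - f / eta) / Q1, 1.0 / Q1
        if gauss_kl(m1, v1, m0, v0) > eps:
            lo = eta                   # shrink the step
        else:
            hi = eta
    return m1, v1

# phi rises to the right, so the component shifts left and concentrates.
print(more_component_update(0.0, 1.0, lambda x: 0.5 * x + 0.1 * x ** 2))
```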
" }, { "heading": "C ELABORATION ON RELATED WORK", "text": "C.1 RELATION BETWEEN EIM AND EM

Recall that the Expectation-Maximization (EM) algorithm (Dempster et al., 1977) maximizes the log-likelihood of the data by iteratively maximizing and tightening the following lower bound:

$$\mathbb{E}_{p(x)}\left[\log q(x)\right] = \mathbb{E}_{p(x)}\left[\int \tilde q(z|x)\log\frac{q(x,z)}{\tilde q(z|x)}dz\right] + \mathbb{E}_{p(x)}\left[\int \tilde q(z|x)\log\frac{\tilde q(z|x)}{q(z|x)}dz\right] = \underbrace{L(q, \tilde q)}_{\text{lower bound}} + \underbrace{\mathbb{E}_{p(x)}\left[\mathrm{KL}(\tilde q(z|x)\|q(z|x))\right]}_{\ge 0}.$$

It is instructive to compare our upper bound (Equation 1) to this lower bound. As mentioned, maximizing the likelihood is equivalent to minimizing the M-projection, i.e., argmin_{q(x)} KL(p(x)||q(x)), where, in relation to our objective, the model and the true distribution have switched places in the non-symmetric KL objective. Like our approach, EM introduces an auxiliary distribution q̃(z|x) and bounds the objective from below by subtracting the KL between the auxiliary distribution and the model, i.e., KL(q̃(z|x)||q(z|x)). In contrast, we obtain our upper bound by adding KL(q(z|x)||q̃(z|x)) to the objective. Again, the distributions have exchanged places within the KL.

C.2 EQUALITY OF f-GAN AND b-GAN

As pointed out in Section 3, both the f-GAN (Nowozin et al., 2016) and the b-GAN (Uehara et al., 2016) yield the same objective for the I-projection.

We start with the f-GAN. Nowozin et al. (2016) propose the following adversarial objective, based on a variational bound for f-divergences (Nguyen et al., 2010):

$$\operatorname{argmin}_{q(x)}\operatorname{argmax}_{V(x)} F(q(x), V(x)) = \mathbb{E}_{p(x)}\left[g(V(x))\right] - \mathbb{E}_{q(x)}\left[f^*(g(V(x)))\right].$$

Here V(x) denotes a neural network with linear output, g(v) the output activation, and f*(t) the Fenchel conjugate (Hiriart-Urruty & Lemaréchal, 2012) of f(u), i.e., the generator function of the f-divergence. For the I-projection, f(u) = −log u and f*(t) = −1 − log(−t). In theory, the only restriction posed on the choice of g(v) is that it outputs only values within the domain of f*(t). Nowozin et al. (2016) suggest g's for various f-divergences and chose exclusively monotonically increasing functions that output large values for samples believed to be from the data distribution. For the I-projection they suggest g(v) = −exp(−v). Thus, the f-GAN objective for the I-projection is given by

$$\operatorname{argmin}_{q(x)}\operatorname{argmax}_{V(x)} F(q(x), V(x)) = -\mathbb{E}_{p(x)}\left[\exp(-V(x))\right] + \mathbb{E}_{q(x)}\left[1 - V(x)\right].$$

The b-GAN objective follows from the density-ratio estimation framework given by Sugiyama et al. (2012) and is given by

$$\operatorname{argmin}_{q(x)}\operatorname{argmax}_{r(x)} \mathbb{E}_{p(x)}\left[f'(r(x))\right] - \mathbb{E}_{q(x)}\left[f'(r(x))r(x) - f(r(x))\right].$$

Here f'(u) denotes the derivative of f(u), and r(x) denotes a density ratio estimator. We need to enforce r(x) > 0 for all x to obtain a valid density ratio estimate. In practice this is usually done by learning r_l(x) = log r(x) instead. Plugging r_l(x), f(u), and f'(u) = −1/u into the general b-GAN objective yields

$$\operatorname{argmin}_{q(x)}\operatorname{argmax}_{r_l(x)} F(q(x), r_l(x)) = -\mathbb{E}_{p(x)}\left[\exp(-r_l(x))\right] + \mathbb{E}_{q(x)}\left[1 - r_l(x)\right],$$

which is the same objective as the f-GAN uses. Yet, the f-GAN and b-GAN objectives are not identical for arbitrary f-divergences.
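Both claims are easy to verify numerically. The sketch below (our own check, with arbitrary sample data and an arbitrary critic) confirms the conjugate f*(t) = −1 − log(−t) on a grid and evaluates both objectives on the same samples:

```python
import numpy as np

# f(u) = -log u has Fenchel conjugate f*(t) = sup_u {t u - f(u)}.
u = np.linspace(1e-4, 50.0, 500_000)
for t in (-0.5, -1.0, -2.0):
    numeric = np.max(t * u + np.log(u))
    print(t, numeric, -1.0 - np.log(-t))   # the two columns agree

# With g(v) = -exp(-v), both objectives reduce to the same expression
# E_p[-exp(-V)] + E_q[1 - V], evaluated here on arbitrary samples.
rng = np.random.default_rng(0)
xp, xq = rng.normal(1, 1, 5000), rng.normal(0, 1, 5000)
V = lambda x: 0.3 * x                       # any critic/logit function
f_gan = np.mean(-np.exp(-V(xp))) - np.mean(-1.0 - np.log(np.exp(-V(xq))))
b_gan = np.mean(-np.exp(-V(xp))) - np.mean(-1.0 + V(xq))
print(f_gan, b_gan)                         # identical
```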
D VISUALIZATION OF SAMPLES
D.1 PEDESTRIAN AND TRAFFIC PREDICTION
Samples from the Stanford Drone Dataset can be found in Figure 7." }, { "heading": "E HYPERPARAMETERS", "text": "In all experiments, we realize the density ratio estimator as a fully connected neural network, which we train using Adam (Kingma & Ba, 2014) and early stopping using a validation set.
Comparison to Generative Adversarial Approaches and Ablation Study
• Data: 10,000 train samples, 5,000 test samples, 5,000 validation samples (for early stopping of the density ratio estimator)
• Density Ratio Estimator (EIM) / Variational function V(x) (f-GAN): 3 fully connected layers, 50 neurons each, trained with L2 regularization with factor 0.001, early stopping, and batch size 1,000
• Updates EIM: MORE-like updates with ε = 0.05 for components and coefficients, 1,000 samples per component and update
• Updates f-GAN: iterate single update steps for generator and discriminator using learning rates of 1e-3 and a batch size of 1,000.
Line Reaching with Planar Robot
• Data: 10,000 train samples, 5,000 test samples, 5,000 validation samples (for early stopping of the density ratio estimator)
• Density Ratio Estimation: 2 fully connected layers of width 100, early stopping, and batch size 1,000
• Updates: MORE-like updates with ε = 0.005 for components and coefficients, 1,000 samples per component and update
Pedestrian and Traffic Prediction
• Data SDD: 7,500 train samples, 3,801 test samples, 3,801 validation samples (for early stopping of the density ratio estimator)
• Data NGS: 10,000 train samples, 5,000 test samples, 5,000 validation samples (for early stopping of the density ratio estimator)
• Density Ratio Estimation: 3 fully connected layers of width 256, trained with L2 regularization with factor 0.0005, early stopping, and batch size 1,000
• Updates: MORE-like updates with ε = 0.01 for components and coefficients, 1,000 samples per component and update
Obstacle Avoidance
• Data: 1,000 train contexts with 10 samples each, 500 test contexts with 10 samples each, 500 validation contexts with 10 samples each (for early stopping of the density ratio estimator).
• Density Ratio Estimation: 3 fully connected layers of width 256, trained with L2 regularization with factor 0.0005, early stopping, and batch size 1,000.
• Component and gating networks: 2 fully connected layers of width 64 for each component and the gating. Trained with Adam (α = 1e-3, β_0 = 0.5) for 10 epochs in each iteration." } ]
2020
null
SP:311d2ebcdc0f71789d6c46d23451657519495119
[ "The paper theoretically investigates the role of “local optima” of the variational objective in ignoring latent variables (leading to posterior collapse) in variational autoencoders. The paper first discusses various potential causes for posterior collapse before diving deeper into a particular cause: local optima. The paper considers a class of near-affine decoders and characterise the relationship between the variance (gamma) in the likelihood and local optima. The paper then extends this discussion for deeper architecture and vanilla autoencoders and illustrate how this can arise when the reconstruction cost is high. The paper considers several experiments to illustrate this issue.", "This paper is clearly written and well structured. After categorizing difference causes of posterior collapse, the authors present a theoretical analysis of one such cause extending beyond the linear case covered in existing work. The authors then extended further to the deep VAE setting and showed that issues with the VAE may be accounted for by issues in the network architecture itself which would present when training an autoencoder." ]
In narrow asymptotic settings, Gaussian VAE models of continuous data have been shown to possess global optima aligned with ground-truth distributions. Even so, it is well known that poor solutions whereby the latent posterior collapses to an uninformative prior are sometimes obtained in practice. However, contrary to conventional wisdom that largely assigns blame for this phenomenon to the undue influence of KL-divergence regularization, we will argue that posterior collapse is, at least in part, a direct consequence of bad local minima inherent to the loss surface of deep autoencoder networks. In particular, we prove that even small nonlinear perturbations of affine VAE decoder models can produce such minima, and in deeper models, analogous minima can force the VAE to behave like an aggressive truncation operator, provably discarding information along all latent dimensions in certain circumstances. Regardless, the underlying message here is not meant to undercut valuable existing explanations of posterior collapse, but rather to refine the discussion and elucidate alternative risk factors that may have been previously underappreciated.
[]
[ { "authors": [ "A. Alemi", "B. Poole", "I. Fischer", "J. Dillon", "R. Saurous", "K. Murphy" ], "title": "Fixing a broken ELBO", "venue": "arXiv preprint arXiv:1711.00464,", "year": 2017 }, { "authors": [ "M. Bauer", "A. Mnih" ], "title": "Resampled priors for variational autoencoders", "venue": "arXiv preprint arXiv:1810.11428,", "year": 2018 }, { "authors": [ "M. Bauer", "A. Mnih" ], "title": "Resampled priors for variational autoencoders", "venue": "International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "S. Bowman", "L. Vilnis", "O. Vinyals", "A. Dai", "R. Jozefowicz", "S. Bengio" ], "title": "Generating sentences from a continuous space", "venue": "arXiv preprint arXiv:1511.06349,", "year": 2015 }, { "authors": [ "Y. Burda", "R. Grosse", "R. Salakhutdinov" ], "title": "Importance weighted autoencoders", "venue": "arXiv preprint arXiv:1509.00519,", "year": 2015 }, { "authors": [ "L. Cai", "H. Gao", "S. Ji" ], "title": "Multi-stage variational auto-encoders for coarse-to-fine image generation", "venue": "arXiv preprint arXiv:1705.07202,", "year": 2017 }, { "authors": [ "E. Candès", "X. Li", "Y. Ma", "J. Wright" ], "title": "Robust principal component analysis", "venue": "J. ACM,", "year": 2011 }, { "authors": [ "X. Chen", "D. Kingma", "T. Salimans", "Y. Duan", "P. Dhariwal", "J. Schulman", "I. Sutskever", "P. Abbeel" ], "title": "Variational lossy autoencoder", "venue": "arXiv preprint arXiv:1611.02731,", "year": 2016 }, { "authors": [ "B. Dai", "D. Wipf" ], "title": "Diagnosing and enhancing VAE models", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "B. Dai", "Y. Wang", "J. Aston", "G. Hua", "D. Wipf" ], "title": "Hidden talents of the variational autoencoder", "venue": "arXiv preprint arXiv:1706.05148,", "year": 2019 }, { "authors": [ "A. Dieng", "Y. Kim", "A. Rush", "D. Blei" ], "title": "Avoiding latent variable collapse with generative skip models", "venue": "arXiv preprint arXiv:1807.04863,", "year": 2018 }, { "authors": [ "K. Gregor", "Y. LeCun" ], "title": "Learning fast approximations of sparse coding", "venue": "International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "J. He", "D. Spokoyny", "G. Neubig", "T. Berg-Kirkpatrick" ], "title": "Lagging inference networks and posterior collapse in variational autoencoders", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "I. Higgins", "L. Matthey", "A. Pal", "C. Burgess", "X. Glorot", "M. Botvinick", "S. Mohamed", "A. Lerchner" ], "title": "β-vae: Learning basic visual concepts with a constrained variational framework", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "C. Huang", "S. Tan", "A. Lacoste", "A. Courville" ], "title": "Improving explorability in variational inference with annealed variational objectives", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "K. Kawaguchi" ], "title": "Deep learning without poor local minima", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "D. Kingma", "M. Welling" ], "title": "Auto-encoding variational Bayes", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "A. Krizhevsky", "G. 
Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "P. Li", "P.M. Nguyen" ], "title": "On random deep weight-tied autoencoders: Exact asymptotic analysis, phase transitions, and implications to training", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "J. Lucas", "G. Tucker", "R. Grosse", "M. Norouzi" ], "title": "Understanding posterior collapse in generative latent variable models", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "L. Maaløe", "M. Fraccaro", "V. Liévin", "O. Winther" ], "title": "BIVA: A very deep hierarchy of latent variables for generative modeling", "venue": null, "year": 1902 }, { "authors": [ "P.A. Mattei", "J. Frellsen" ], "title": "Leveraging the exact likelihood of deep latent variable models", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "E. Orjebin" ], "title": "A recursive formula for the moments of a truncated univariate normal distribution", "venue": null, "year": 2014 }, { "authors": [ "A. Razavi", "A. Oord", "B. Poole", "O. Vinyals" ], "title": "Preventing posterior collapse with δ-VAEs", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "D. Rezende", "S. Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "arXiv preprint arXiv:1505.05770,", "year": 2015 }, { "authors": [ "D. Rezende", "S. Mohamed", "D. Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "M. Rolinek", "D. Zietlow", "G. Martius" ], "title": "Variational autoencoders pursue PCA directions (by accident)", "venue": null, "year": 2019 }, { "authors": [ "C. Sønderby", "T. Raiko", "L. Maaløe", "S. Sønderby", "O. Winther" ], "title": "How to train deep variational autoencoders and probabilistic ladder networks", "venue": "arXiv preprint arXiv:1602.02282,", "year": 2016 }, { "authors": [ "P. Sprechmann", "A.M. Bronstein", "G. Sapiro" ], "title": "Learning efficient sparse and low rank models", "venue": "IEEE Trans. Pattern Analysis and Machine Intelligence,", "year": 2015 }, { "authors": [ "M. Tipping", "C. Bishop" ], "title": "Probabilistic principal component analysis", "venue": "J. Royal Statistical Society, Series B,", "year": 1999 }, { "authors": [ "J. Tomczak", "M. Welling" ], "title": "VAE with a VampPrior", "venue": "International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "A. Van den Oord", "N. Kalchbrenner", "L. Espeholt", "O. Vinyals", "A. Graves", "K. Kavukcuoglu" ], "title": "Conditional image generation with PixelCNN decoders", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "S. Yeung", "A. Kannan", "Y. Dauphin", "L. Fei-Fei" ], "title": "Tackling over-pruning in variational autoencoders", "venue": "arXiv preprint arXiv:1706.03643,", "year": 2017 }, { "authors": [ "C. Yun", "S. Sra", "A. 
Jadbabaie" ], "title": "Small nonlinearities in activation functions create bad local minima in neural networks", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "P. Zhao", "B. Yu" ], "title": "On model selection consistency of Lasso", "venue": "Journal of Machine learning research,", "year": 2006 } ]
[ { "heading": "1 INTRODUCTION", "text": "The variational autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) represents a powerful generative model of data points that are assumed to possess some complex yet unknown latent structure. This assumption is instantiated via the marginalized distribution\npθ(x) = ∫ pθ(x|z)p(z)dz, (1)\nwhich forms the basis of prevailing VAE models. Here z ∈ Rκ is a collection of unobservable latent factors of variation that, when drawn from the prior p(z), are colloquially said to generate an observed data point x ∈ Rd through the conditional distribution pθ(x|z). The latter is controlled by parameters θ that can, at least conceptually speaking, be optimized by maximum likelihood over pθ(x) given available training examples.\nIn particular, assuming n training points X = [x(1), . . . ,x(n)], maximum likelihood estimation is tantamount to minimizing the negative log-likelihood expression 1n ∑ i− log [ pθ ( x(i) )] . Proceeding further, because the marginalization over z in (1) is often intractable, the VAE instead minimizes a convenient variational upper bound given by L(θ, φ) , 1 n\nn∑ i=1 { −Eqφ(z|x(i)) [ log pθ ( x(i)|z )] + KL [ qφ(z|x(i)||p(z) ]} ≥ 1n n∑ i=1 − log [ pθ ( x(i) )] ,\n(2) with equality iff qφ(z|x(i)) = pθ(z|x(i)) for all i. The additional parameters φ govern the shape of the variational distribution qφ(z|x) that is designed to approximate the true but often intractable latent posterior pθ(z|x). The VAE energy from (2) is composed of two terms, a data-fitting loss that borrows the basic structure of an autoencoder (AE), and a KL-divergence-based regularization factor. The former incentivizes assigning high probability to latent codes z that facilitate accurate reconstructions of each x(i). In fact, if qφ(z|x) is a Dirac delta function, this term is exactly equivalent to a deterministic AE with data reconstruction loss defined by− log pθ (x|z). Overall, it is because of this association that qφ(z|x) is generally referred to as the encoder distribution, while pθ (x|z) denotes the decoder\ndistribution. Additionally, the KL regularizer KL [qφ(z|x)||p(z)] pushes the encoder distribution towards the prior without violating the variational bound.\nFor continuous data, which will be our primary focus herein, it is typical to assume that\np(z) = N (z|0, I), pθ (x|z) = N (x|µx, γI), and qφ (z|x) = N (z|µz,Σz), (3)\nwhere γ > 0 is a scalar variance parameter, while the Gaussian moments µx ≡ µx (z; θ), µz ≡ µz (x;φ), and Σz ≡ diag[σz (x;φ)]2 are computed via feedforward neural network layers. The encoder network parameterized by φ takes x as an input and outputs µz and Σz . Similarly the decoder network parameterized by θ converts a latent code z into µx. Given these assumptions, the generic VAE objective from (2) can be refined to L(θ, φ) = 1n n∑ i=1 { Eqφ(z|x(i)) [ 1 γ ‖x (i) − µx (z; θ) ‖22 ] (4)\n+ d log γ + ∥∥∥σz (x(i);φ)∥∥∥2\n2 − log ∣∣∣∣diag [σz (x(i);φ)]2∣∣∣∣+ ∥∥∥µz (x(i);φ)∥∥∥2 2 } ,\nexcluding an inconsequential factor of 1/2. This expression can be optimized over using SGD and a simple reparameterization strategy (Kingma & Welling, 2014; Rezende et al., 2014) to produce parameter estimates {θ∗, φ∗}. Among other things, new samples approximating the training data can then be generated via the ancestral process znew ∼ N (z|0, I) and xnew ∼ pθ∗(x|znew). 
Although it has been argued that global minima of (4) may correspond with the optimal recovery of ground truth distributions in certain asymptotic settings (Dai & Wipf, 2019), it is well known that in practice, VAE models are at risk of converging to degenerate solutions where, for example, it may be that qφ (z|x) = p(z). This phenomena, commonly referred to as VAE posterior collapse (He et al., 2019; Razavi et al., 2019), has been acknowledged and analyzed from a variety of different perspectives as we detail in Section 2. That being said, we would argue that there remains lingering ambiguity regarding the different types and respective causes of posterior collapse. Consequently, Section 3 provides a useful taxonomy that will serve to contextualize our main technical contributions. These include the following:\n• Building upon existing analysis of affine VAE decoder models, in Section 4 we prove that even arbitrarily small nonlinear activations can introduce suboptimal local minima exhibiting posterior collapse.\n• We demonstrate in Section 5 that if the encoder/decoder networks are incapable of sufficiently reducing the VAE reconstruction errors, even in a deterministic setting with no KL-divergence regularizer, there will exist an implicit lower bound on the optimal value of γ. Moreover, we prove that if this γ is sufficiently large, the VAE will behave like an aggressive thresholding operator, enforcing exact posterior collapse, i.e., qφ (z|x) = p(z).\n• Based on these observations, we present experiments in Section 6 establishing that as network depth/capacity is increased, even for deterministic AE models with no regularization, reconstruction errors become worse. This bounds the effective VAE trade-off parameter γ such that posterior collapse is essentially inevitable. Collectively then, we provide convincing evidence that posterior collapse is, at least in certain settings, the fault of deep AE local minima, and need not be exclusively a consequence of usual suspects such as the KL-divergence term.\nWe conclude in Section 7 with practical take-home messages, and motivate the search for improved AE architectures and training regimes that might be leveraged by analogous VAE models." }, { "heading": "2 RECENT WORK AND THE USUAL SUSPECTS FOR INSTIGATING COLLAPSE", "text": "Posterior collapse under various guises is one of the most frequently addressed topics related to VAE performance. Depending on the context, arguably the most common and seemingly transparent suspect for causing collapse is the KL regularization factor that is obviously minimized by qφ(z|x) = p(z). This perception has inspired various countermeasures, including heuristic annealing of the KL penalty or KL warm-start (Bowman et al., 2015; Huang et al., 2018; Sønderby et al., 2016), tighter bounds on the log-likelihood (Burda et al., 2015; Rezende & Mohamed, 2015), more\ncomplex priors (Bauer & Mnih, 2018; Tomczak & Welling, 2018), modified decoder architectures (Cai et al., 2017; Dieng et al., 2018; Yeung et al., 2017), or efforts to explicitly disallow the prior from ever equaling the variational distribution (Razavi et al., 2019). Thus far though, most published results do not indicate success generating high-resolution images, and in the majority of cases, evaluations are limited to small images and/or relatively shallow networks. This suggests that there may be more nuance involved in pinpointing the causes and potential remedies of posterior collapse. 
One notable exception though is the BIVA model from (Maaløe et al., 2019), which employs a bidirectional hierarchy of latent variables, in part to combat posterior collapse. While improvements in NLL scores have been demonstrated with BIVA using relatively deep encoder/decoders, this model is significantly more complex and difficult to analyze.\nOn the analysis side, there have been various efforts to explicitly characterize posterior collapse in restricted settings. For example, Lucas et al. (2019) demonstrate that if γ is fixed to a sufficiently large value, then a VAE energy function with an affine decoder mean will have minima that overprune latent dimensions. A related linearized approximation to the VAE objective is analyzed in (Rolinek et al., 2019); however, collapsed latent dimensions are excluded and it remains somewhat unclear how the surrogate objective relates to the original. Posterior collapse has also been associated with data-dependent decoder covariance networks Σx(z; θ) 6= γI (Mattei & Frellsen, 2018), which allows for degenerate solutions assigning infinite density to a single data point and a diffuse, collapsed density everywhere else. Finally, from the perspective of training dynamics, (He et al., 2019) argue that a lagging inference network can also lead to posterior collapse." }, { "heading": "3 TAXONOMY OF POSTERIOR COLLAPSE", "text": "Although there is now a vast literature on the various potential causes of posterior collapse, there remains ambiguity as to exactly what this phenomena is referring to. In this regard, we believe that it is critical to differentiate five subtle yet quite distinct scenarios that could reasonably fall under the generic rubric of posterior collapse:\n(i) Latent dimensions of z that are not needed for providing good reconstructions of the training data are set to the prior, meaning qφ(zj |x) ≈ p(zj) = N (0, 1) at any superfluous dimension j. Along other dimensions σ2z will be near zero and µz will provide a usable predictive signal leading to accurate reconstructions of the training data. This case can actually be viewed as a desirable form of selective posterior collapse that, as argued in (Dai & Wipf, 2019), is a necessary (albeit not sufficient) condition for generating good samples.\n(ii) The decoder variance γ is not learned but fixed to a large value1 such that the KL term from (2) is overly dominant, forcing most or all dimensions of z to follow the prior N (0, 1). In this scenario, the actual global optimum of the VAE energy (conditioned on γ being fixed) will lead to deleterious posterior collapse and the model reconstructions of the training data will be poor. In fact, even the original marginal log-likelihood can potentially default to a trivial/useless solution if γ is fixed too large, assigning a small marginal likelihood to the training data, provably so in the affine case (Lucas et al., 2019).\n(iii) As mentioned previously, if the Gaussian decoder covariance is learned as a separate network structure (instead of simply Σx(z; θ) = γI), there can exist degenerate solutions that assign infinite density to a single data point and a diffuse, isotropic Gaussian elsewhere (Mattei & Frellsen, 2018). 
This implies that (4) can be unbounded from below at what amounts to a posterior collapsed solution with bad reconstructions almost everywhere.\n(iv) When powerful non-Gaussian decoders are used, and in particular those that can parameterize complex distributions regardless of the value of z (e.g., PixelCNN-based (Van den Oord et al., 2016)), it is possible for the VAE to assign high probability to the training data even if qφ(z|x) = p(z) (Alemi et al., 2017; Bowman et al., 2015; Chen et al., 2016). This category of posterior collapse is quite distinct from categories (ii) and (iii) above in that, although the reconstructions are similarly poor, the associated NLL scores can still be good.\n(v) The previous four categories of posterior collapse can all be directly associated with emergent properties of the VAE global minimum under various modeling conditions. In contrast, a fifth type of collapse exists that is the explicit progeny of bad VAE local minima. More specifically, as we will argue shortly, when deeper encoder/decoder networks are used, the risk of converging to bad, overregularized solutions increases.\n1Or equivalently, a KL scaling parameter such as used by the β-VAE (Higgins et al., 2017) is set too large.\nThe remainder of this paper will primarily focus on category (v), with brief mention of the other types for comparison purposes where appropriate. Our rationale for this selection bias is that, unlike the others, category (i) collapse is actually advantageous and hence need not be mitigated. In contrast, while category (ii) is undesirable, it can be avoided by learning γ. As for category (iii), this represents an unavoidable consequence of models with flexible decoder covariances capable of detecting outliers (Dai et al., 2019). In fact, even simpler inlier/outlier decomposition models such as robust PCA are inevitably at risk of this phenomenon (Candès et al., 2011). Regardless, when Σx(z; θ) = γI this problem goes away. And finally, we do not address category (iv) in depth simply because it is unrelated to the canonical Gaussian VAE models of continuous data that we have chosen to examine herein. Regardless, it is still worthwhile to explicitly differentiate these five types and bear them in mind when considering attempts to both explain and improve VAE models." }, { "heading": "4 INSIGHTS FROM SIMPLIFIED CASES", "text": "Because different categories of posterior collapse can be impacted by different global/local minima structures, a useful starting point is a restricted setting whereby we can comprehensively characterize all such minima. For this purpose, we first consider a VAE model with the decoder network set to an affine function. As is often assumed in practice, we choose Σx = γI , where γ > 0 is a scalar parameter within the parameter set θ. In contrast, for the mean function we choose µx = W xz + bx for some weight matrix W x and bias vector bx. The encoder can be arbitrarily complex (although the optimal structure can be shown to be affine as well).\nGiven these simplifications, and assuming the training data has r ≥ κ nonzero singular values, it has been demonstrated that at any global optimum, the columns of W x will correspond with the first κ principal components of X, provided that we simultaneously learn γ or set it to the optimal value (which is available in closed form) (Dai et al., 2019; Lucas et al., 2019; Tipping & Bishop, 1999). Additionally, it has also been shown that no spurious, suboptimal local minima will exist.
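For reference, this closed-form optimum is simple to compute; the sketch below (ours, following Tipping & Bishop (1999), and assuming centered data with n ≥ d) returns the weights and γ that the affine-decoder VAE recovers at any global optimum, up to an arbitrary rotation of the columns.

```python
import numpy as np

def affine_vae_global_optimum(X, kappa):
    """Closed-form pPCA solution (Tipping & Bishop, 1999) attained by the
    affine-decoder VAE. X is d x n, assumed centered, with n >= d."""
    d, n = X.shape
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    lam = s ** 2 / n              # eigenvalues of the sample covariance
    gamma = lam[kappa:].mean()    # optimal gamma: mean of the discarded eigenvalues
    # Columns of W_x are the top-kappa principal directions, scaled by
    # sqrt(lambda_j - gamma) (up to an arbitrary rotation).
    W_x = U[:, :kappa] * np.sqrt(np.maximum(lam[:kappa] - gamma, 0.0))
    return W_x, gamma
```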
Note also that if r < κ the same basic conclusions still apply; however,W x will only have r nonzero columns, each corresponding with a different principal component of the data. The unused latent dimensions will satisfy qφ(z|x) = N (0, I), which represents the canonical form of the benign category (i) posterior collapse. Collectively, these results imply that if we converge to any local minima of the VAE energy, we will obtain the best possible linear approximation to the data using a minimal number of latent dimensions, and malignant posterior collapse is not an issue, i.e., categories (ii)-(v) will not arise.\nEven so, if instead of learning γ, we choose a fixed value that is larger than any of the significant singular values ofXX>, then category (ii) posterior collapse can be inadvertently introduced. More specifically, let r̃γ denote the number of such singular values that are smaller than some fixed γ value. Then along κ− r̃γ latent dimensions qφ(z|x) = N (0, I), and the corresponding columns of W x will be set to zero at the global optima (conditioned on this fixed γ), regardless of whether or not these dimensions are necessary for accurately reconstructing the data. And it has been argued that the risk of this type of posterior collapse at a conditionally-optimal global minimum will likely be inherited by deeper models as well (Lucas et al., 2019), although learning γ can ameliorate this problem.\nOf course when we move to more complex architectures, the risk of bad local minima or other suboptimal stationary points becomes a new potential concern, and it is not clear that the affine case described above contributes to reliable, predictive intuitions. To illustrate this point, we will now demonstrate that the introduction of an arbitrarily small nonlinearity can nonetheless produce a pernicious local minimum that exhibits category (v) posterior collapse. For this purpose, we assume the decoder mean function\nµx = πα (W xz) + bx, with πα(u) , sign(u) (|u| − α)+ , α ≥ 0. (5) The function πα is nothing more than a soft-threshold operator as is commonly used in neural network architectures designed to reflect unfolded iterative algorithms for representation learning (Gregor & LeCun, 2010; Sprechmann et al., 2015). In the present context though, we choose this nonlinearity largely because it allows (5) to reflect arbitrarily small perturbations away from a strictly\naffine model, and indeed if α = 0 the exact affine model is recovered. Collectively, these specifications lead to the parameterization θ = {W x, bx, γ} and φ = {µ(i)z ,σ(i)z }ni=1 and energy (excluding irrelevant scale factors and constants) given by\nL(θ, φ) = n∑ i=1 { Eqφ(z|x(i)) [ 1 γ ∥∥∥x(i) − πα (W xz)− bx∥∥∥2 2 ] (6)\n+d log γ + ∥∥∥σ(i)z ∥∥∥2\n2 − log ∣∣∣∣diag [σ(i)z ]2∣∣∣∣+ ∥∥∥µ(i)z ∥∥∥2 2 } ,\nwhere µ(i)z and σ (i) z denote arbitrary encoder moments for data point i (this is consistent with the assumption of an arbitrarily complex encoder as used in previous analysis of affine decoder models). Now define γ̄ , 1nd ∑ i ‖x(i) − x̄‖22, with x̄ , 1 n ∑ i x (i). We then have the following result:\nProposition 4.1 For any α > 0, there will always exist data sets X such that (6) has a global minimum that perfectly reconstructs the training data, but also a bad local minimum characterized by\nqφ(z|x) = N (z|0, I) and pθ(x) = N (x|x̄, γ̄I). 
(7)\nHence the moment we allow for nonlinear (or more precisely, non-affine) decoders there can exist a poor local minimum, across all parameters including a learnable γ, that exhibits category (v) posterior collapse.2 In other words, no predictive information about x passes through the latent space, and a useless/non-informative distribution pθ(x) emerges that is incapable of assigning high probability to the data (except obviously in the trivial degenerate case where all the data points are equal to the empirical mean x̄). We will next investigate the degree to which such concerns can influence behavior in arbitrarily deep architectures." }, { "heading": "5 EXTRAPOLATING TO PRACTICAL DEEP ARCHITECTURES", "text": "Previously we have demonstrated the possibility of local minima aligned with category (v) posterior collapse the moment we allow for decoders that deviate ever so slightly from an affine model. But nuanced counterexamples designed for proving technical results notwithstanding, it is reasonable to examine what realistic factors are largely responsible for leading optimization trajectories towards such potential bad local solutions. For example, is it merely the strength of the KL regularization term, and if so, why can we not just use KL warm-start to navigate around such points? In this section we will elucidate a deceptively simple, alternative risk factor that will be corroborated empirically in Section 6.\nFrom the outset, we should mention that with deep encoder/decoder architectures commonly used in practice, a stationary point can more-or-less always exist at solutions exhibiting posterior collapse. As a representative and ubiquitous example, please see Appendix A.4. But of course without further details, this type of stationary point could conceivably manifest as a saddle point (stable or unstable), a local maximum, or a local minimum. For the strictly affine decoder model mentioned in Section 4, there will only be a harmless unstable saddle point at any collapsed solution (the Hessian has negative eigenvalues). In contrast, for the special nonlinear case elucidated via Proposition 4.1 we can instead have a bad local minima. We will now argue that as the depth of common feedforward architectures increases, the risk of converging to category (v)-like solutions with most or all latent dimensions stuck at bad stationary points can also increase.\nSomewhat orthogonal to existing explanations of posterior collapse, our basis for this argument is not directly related to the VAE KL-divergence term. Instead, we consider a deceptively simple yet potentially influential alternative: Unregularized, deterministic AE models can have bad local solutions with high reconstruction errors when sufficiently deep. This in turn can directly translate to category (v) posterior collapse when training a corresponding VAE model with a matching deep architecture. Moreover, to the extent that this is true, KL warm-start or related countermeasures\n2This result mirrors related efforts examining linear DNNs, where it has been previously demonstrated that under certain conditions, all local minima are globally optimal (Kawaguchi, 2016), while small nonlinearities can induce bad local optima (Yun et al., 2019). However, the loss surface of these models is completely different from a VAE, and hence we view Proposition 4.1 as a complementary result.\nwill likely be ineffective in avoiding such suboptimal minima. 
We will next examine these claims in greater depth followed by a discussion of practical implications." }, { "heading": "5.1 FROM DEEPER ARCHITECTURES TO INEVITABLE POSTERIOR COLLAPSE", "text": "Consider the deterministic AE model formed by composing the encoder mean µx ≡ µx (·; θ) and decoder mean µz ≡ µz (·;φ) networks from a VAE model, i.e., reconstructions x̂ are computed via x̂ = µx [µz (x;φ) ; θ]. We then train this AE to minimize the squared-error loss 1 nd ∑n i=1 ∥∥∥x(i) − x̂(i)∥∥∥2 2 , producing parameters {θae, φae}. Analogously, the corresponding VAE trained to minimize (4) arrives at a parameter set denoted {θvae, φvae}. In this scenario, it will typically follow that\n1 nd n∑ i=1 ∥∥∥x(i) − µx [µz (x(i);φae) ; θae]∥∥∥2 2 ≤ 1nd n∑ i=1 Eqφvae(z|x(i)) [ ‖x(i) − µx (z; θvae) ‖22 ] ,\n(8) meaning that the deterministic AE reconstruction error will generally be smaller than the stochastic VAE version. Note that if σ2z → 0, the VAE defaults to the same deterministic encoder as the AE and hence will have identical representational capacity; however, the KL regularization prevents this from happening, and any σ2z > 0 can only make the reconstructions worse.\n3 Likewise, the KL penalty factor ‖µ2z‖22 can further restrict the effective capacity and increase the reconstruction error of the training data. Beyond these intuitive arguments, we have never empirically found a case where (8) does not hold (see Section 6 for examples).\nWe next define the set Sε , { θ, φ : 1nd n∑ i=1 ∥∥∥x(i) − x̂(i)∥∥∥2 2 ≤ ε } (9)\nfor any > 0. Now suppose that the chosen encoder/decoder architecture is such that with high probability, achievable optimization trajectories (e.g., via SGD or related) lead to parameters {θae, φae} /∈ Sε, i.e., Prob ({θae, φae} ∈ Sε) ≈ 0. It then follows that the optimal VAE noise variance denoted γ∗, when conditioned on practically-achievable values for other network parameters, will satisfy\nγ∗ = 1nd n∑ i=1 Eqφvae(z|x(i)) [ ‖x(i) − µx (z; θvae) ‖22 ] ≥ ε. (10)\nThe equality in (10) can be confirmed by simply differentiating the VAE cost w.r.t. γ and equating to zero, while the inequality comes from (8) and the fact that {θae, φae} /∈ Sε. From inspection of the VAE energy from (4), it is readily apparent that larger values of γ will discount the data-fitting term and therefore place greater emphasis on the KL divergence. Since the latter is minimized when the latent posterior equals the prior, we might expect that whenever ε and therefore γ∗ is increased per (10), we are at a greater risk of nearing collapsed solutions. But the nature of this approach is not at all transparent, and yet this subtlety has important implications for understanding the VAE loss surface in regions at risk of posterior collapse.\nFor example, one plausible hypothesis is that only as γ∗ →∞ do we risk full category (v) collapse. If this were the case, we might have less cause for alarm since the reconstruction error and by association γ∗ will typically be bounded from above at any local minimizer. However, we will now demonstrate that even finite values can exactly collapse the posterior. In formally showing this, it is helpful to introduce a slightly narrower but nonetheless representative class of VAE models.\nSpecifically, let f ( µz,σz, θ,x (i) ) , Eqφ(z|x(i)) [ ‖x(i) − µx (z; θ) ‖22 ] , i.e., the VAE data term evaluated at a single data point without the 1/γ scale factor. 
We then define a wellbehaved VAE as a model with energy function (4) designed such that ∇µzf ( µz,σz, θ,x (i) )\nand ∇σzf ( µz,σz, θ,x (i) ) are Lipschitz continuous gradients for all i. Furthermore, we specify a non-\ndegenerate decoder as any µx(z; θ = θ̃) with θ set to a θ̃ value such that∇σzf ( µz,σz, θ̃,x (i) ) ≥\n3Except potentially in certain contrived adversarial conditions that do not represent practical regimes.\nc for some constant c > 0 that can be arbitrarily small. This ensures that f is an increasing function of σz , a quite natural stipulation given that increasing the encoder variance will generally only serve to corrupt the reconstruction, unless of course the decoder is completely blocking the signal from the encoder. In the latter degenerate situation, it would follow that ∇µzf ( µz,σz, θ,x (i) ) =\n∇σzf ( µz,σz, θ,x (i) ) = 0, which is more-or-less tantamount to category (v) posterior collapse.\nBased on these definitions, we can now present the following:\nProposition 5.1 For any well-behaved VAE with arbitrary, non-degenerate decoder µx(z; θ = θ̃), there will always exist a γ′ < ∞ such that the trivial solution µx(z; θ 6= θ̃) = x̄ and qφ(z|x) = p(z) will have lower cost.\nAround any evaluation point, the sufficient condition we applied to demonstrate posterior collapse (see proof details) can also be achieved with some γ′′ < γ′ if we allow for partial collapse, i.e., qφ∗(zj |x) = p(zj) along some but not all latent dimensions j ∈ {1, . . . , κ}. Overall, the analysis loosely suggests that the number of dimensions vulnerable to exact collapse will increase monotonically with γ.\nProposition 5.1 also provides evidence that the VAE behaves like a strict thresholding operator, completely shutting off latent dimensions using a finite value for γ. This is analogous to the distinction between using the `1 versus `2 norm for solving regularized regression problems of the standard form minu ‖x−Au‖22 + γ η(u), whereA is a design matrix and η is a penalty function. When η is the `1 norm, some or all elements of u can be pruned to exactly zero with a sufficiently large but finite γ Zhao & Yu (2006). In contrast, when the `2 norm is applied, the coefficients will be shrunk to smaller values but never pushed all the way to zero unless γ →∞." }, { "heading": "5.2 PRACTICAL IMPLICATIONS", "text": "In aggregate then, if the AE base model displays unavoidably high reconstruction errors, this implicitly constrains the corresponding VAE model to have a large optimal γ value, which can potentially lead to undesirable posterior collapse per Proposition 5.1. In Section 6 we will demonstrate empirically that training unregularized AE models can become increasingly difficult and prone to bad local minima (or at least bad stable stationary points) as the depth increases; and this difficulty can persist even with counter-measures such as skip connections. Therefore, from this vantage point we would argue that it is the AE base architecture that is effectively the guilty party when it comes to category (v) posterior collapse.\nThe perspective described above also helps to explain why heuristics like KL warm-start are not always useful for improving VAE performance. 
With the standard Gaussian model (4) considered herein, KL warm-start amounts to adopting a pre-defined schedule for incrementally increasing γ starting from a small initial value, the motivation being that a small γ will steer optimization trajectories away from overregularized solutions and posterior collapse.\nHowever, regardless of how arbitrarily small γ may be fixed at any point during this process, the VAE reconstructions are not likely to be better than the analogous deterministic AE (which is roughly equivalent to forcing γ = 0 within the present context). This implies that there can exist an implicit γ∗ as computed by (10) that can be significantly larger such that, even if KL warm-start is used, the optimization trajectory may well lead to a collapsed posterior stationary point that has this γ∗ as the optimal value in terms of minimizing the VAE cost with other parameters fixed. Note that if full posterior collapse does occur, the gradient from the KL term will equal zero and hence, to be at a stationary point it must be that the data term gradient is also zero. In such situations, varying γ manually will not impact the gradient balance anyway." }, { "heading": "6 EMPIRICAL ASSESSMENTS", "text": "In this section we empirically demonstrate the existence of bad AE local minima with high reconstruction errors at increasing depth, as well as the association between these bad minima and imminent VAE posterior collapse. For this purpose, we first train fully connected AE and VAE models with 1, 2, 4, 6, 8 and 10 hidden layers on the Fashion-MNIST dataset (Xiao et al., 2017). Each hidden layer is 512-dimensional and followed by ReLU activations (see Appendix A.1 for further\ndetails). The reconstruction error is shown in Figure 1(left). As the depth of the network increases, the reconstruction error of the AE model first decreases because of the increased capacity. However, when the network becomes too deep, the error starts to increase, indicating convergence to a bad local minima (or at least stable stationary point/plateau) that is unrelated to KL-divergence regularization. The reconstruction error of a VAE model is always worse than that of the corresponding AE model as expected. Moreover, while KL warm-start/annealing can help to improve the VAE reconstructions to some extent, performance is still worse than the AE as expected.\nWe next train AE and VAE models using a more complex convolutional network on Cifar100 data (Krizhevsky & Hinton, 2009). At each spatial scale, we use 1 to 5 convolution layers followed by ReLU activations. We also apply 2× 2 max pooling to downsample the feature maps to a smaller spatial scale in the encoder and use a transposed convolution layer to upscale the feature map in the decoder. The reconstruction errors are shown in Figure 1(middle). Again, the trend is similar to the fully-connected network results. See Appendix A.1 for an additional ImageNet example.\nIt has been argued in the past that skip connections can increase the mutual information between observations x(i) and the inferred latent variables z (Dieng et al., 2018), reducing the risk of posterior collapse. And it is well-known that ResNet architectures based on skip connections can improve performance on numerous recognition tasks (He et al., 2016). To this end, we train a number of AE models using ResNet-inspired encoder/decoder architectures on multiple datasets including Cifar10, Cifar100, SVHN and CelebA. 
Similar to the convolution network structure from above, we use 1, 2, and 4 residual blocks within each spatial scale. Inside each block, we apply 2 to 5 convolution layers. For aggregate comparison purposes, we normalize the reconstruction error obtained on each dataset by dividing it with the corresponding error produced by the most shallow network structure (1 residual block with 2 convolution layers). We then average the normalized reconstruction errors over all four datasets. The average normalized errors are shown in Figure 1(right), where we observe that adding more convolution layers inside each residual block can increase the reconstruction error when the network is too deep. Moreover, adding more residual blocks can also lead to higher reconstruction errors. And empirical results obtained using different datasets and networks architectures, beyond the conditions of Figure 1, also show a general trend of increased reconstruction error once the effective depth is sufficiently deep.\nWe emphasize that in all these models, as the network complexity/depth increases, the simpler models are always contained within the capacity of the larger ones. Therefore, because the reconstruction error on the training data is becoming worse, it must be the case that the AE is becoming stuck at bad local minima or plateaus. Again since the AE reconstruction error serves as a probable lower bound for that of the VAE model, a deeper VAE model will likely suffer the same problem, only exacerbated by the KL-divergence term in the form of posterior collapse. This implies that there will be more σz values moving closer to 1 as the VAE model becomes deeper; similarly µz values will push towards 0. The corresponding dimensions will encode no information and become completely useless.\nTo help corroborate this association between bad AE local minima and VAE posterior collapse, we plot histograms of VAE σz values as network depth is varied in Figure 2. The models are trained on CelebA and the number of convolution layers in each spatial scale is 2, 4 and 5 from left to right. As the depth increases, the reconstruction error becomes larger and there are more σz near 1." }, { "heading": "7 DISCUSSION", "text": "In this work we have emphasized the previously-underappreciated role of bad local minima in trapping VAE models at posterior collapsed solutions. Unlike affine decoder models whereby all local minima are provably global, Proposition 4.1 stipulates that even infinitesimal nonlinear perturbations can introduce suboptimal local minima characterized by deleterious posterior collapse. Furthermore, we have demonstrated that the risk of converging to such a suboptimal minima increases with decoder depth. In particular, we outline the following practically-likely pathway to posterior collapse:\n1. Deeper AE architectures are essential for modeling high-fidelity images or similar, and yet counter-intuitively, increasing AE depth can actually produce larger reconstruction errors on the training data because of bad local minima (with or without skip connections). An analogous VAE model with the same architecture will likely produce even worse reconstructions because of the additional KL regularization term, which is not designed to steer optimization trajectories away from poor reconstructions.\n2. At any such bad local minima, the value of γ will necessarily be large, i.e., if it is not large, we cannot be at a local minimum.\n3. 
But because of the thresholding behavior of the VAE as quantified by Proposition 5.1, as γ becomes larger there is an increased risk of exact posterior collapse along excessive latent dimensions. And complete collapse along all dimensions will occur for some finite γ sufficiently large. Furthermore, explicitly forcing γ to be small does not fix this problem, since in some sense the implicit γ∗ is still large as discussed in Section 5.2.\nWhile we believe that this message is interesting in and of itself, there are nonetheless several practically-relevant implications. For example, complex hierarchical VAEs like BIVA notwithstanding, skip connections and KL warm-start have only modest ability to steer optimization trajectories towards good solutions; however, this underappreciated limitation will not generally manifest until networks are sufficiently deep as we have considered. Fortunately, any advances or insights gleaned from developing deeper unregularized AEs, e.g., better AE architectures, training procedures, or initializations (Li & Nguyen, 2019), could likely be adapted to reduce the risk of posterior collapse in corresponding VAE models.\nIn closing, we should also mention that, although this work has focused on Gaussian VAE models, many of the insights translate into broader non-Gaussian regimes. For example, a variety of recent VAE enhancements involve replacing the fixed Gaussian latent-space prior p(z) with a parameterized non-Gaussian alternative (Bauer & Mnih, 2019; Tomczak & Welling, 2018). This type of modification provides greater flexibility in modeling the aggregated posterior in the latent space, which is useful for generating better samples (Makhzani et al., 2016). However, it does not immunize VAEs against the bad local minima introduced by deep decoders, and good reconstructions are required by models using Gaussian or non-Gaussian priors alike. Therefore, our analysis herein still applies in much the same way." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 NETWORK STRUCTURE, EXPERIMENTAL SETTINGS, AND ADDITIONAL IMAGENET RESULTS", "text": "Three different kinds of network structures are used in the experiments: fully connected networks, convolution networks, and residual networks. For all these structures, we set the dimension of the latent variable z to 64. We then describe the network details accordingly.\nFully Connected Network: This experiment is applied only to the simple Fashion-MNIST dataset, which contains 60000 28 × 28 black-and-white images. These images are first flattened to 784-dimensional vectors. Both the encoder and decoder have a varying number of 512-dimensional hidden layers, each followed by ReLU activations.\nConvolution Network: The original images are either 32 × 32 × 3 (Cifar10, Cifar100 and SVHN) or 64 × 64 × 3 (CelebA and ImageNet). In the encoder, we use a variable number (denoted t) of 3 × 3 convolution layers for each spatial scale. Each convolution layer is followed by a ReLU activation. Then we use 2 × 2 max pooling to downsample the feature map to a smaller spatial scale. The number of channels is doubled when the spatial scale is halved. We use 64 channels when the spatial scale is 32 × 32. When the spatial scale reaches 4 × 4 (there should be 512 channels in this feature map), we use average pooling to transform the feature map to a vector, which is then transformed into the latent variable using a fully connected layer.
In the decoder, the latent variable is first transformed to a 4096-dimensional vector using a fully connected layer and then reshaped to 2×2×1024. Again in each spatial scale, we use 1 transpose convolution layer to upscale the feature map and halve the number of channels followed by t− 1 convolution layers. Each convolution and transpose convolution layer is followed by a ReLU activation layer. When the spatial scale reaches that of the original image, we use a convolution layer to transofrm the feature map to 3 channels.\nResidual Network: The network structure of the residual network is similar to that of a convolution network described above. We simply replace the convolution layer with a residual block. Inside the residual block, we use different numbers of convolution numbers. (The typical number of convolution layers inside a residual block is 2 or 3. In our experiments, we try 2, 3, 4 and 5.)\nTraining Details: All the experiments with different network structures and datasets are trained in the same procedure. We use the Adam optimization method and the default optimizer hyper parameters in Tensorflow. The batch size is 64 and we train the model for 250K iterations. The initial learning rate is 0.0002 and it is halved every 100K iterations.\nAdditional Results on ImageNet: We also show the reconstruction error for convolution networks with increasing depth trained on ImageNet in Figure 3. The trend is the same as that in Figure 1." }, { "heading": "A.2 PROOF OF PROPOSITION 4.1", "text": "While the following analysis could in principle be extended to more complex datasets, for our purposes it is sufficient to consider the following simplified case for ease of exposition. Specifically, we assume that n > 1, d > κ, set d = 2, n = 2, κ = 1, and x(1) = (1, 1),x(2) = (−1,−1). Additionally, we will use the following basic facts about the Gaussian tail. Note that (12)-(13) below follow from integration by parts; see Orjebin (2014).\nLemma A.1 Let ∼ N (0, 1), A > 0; φ(x),Φ(x) be the pdf and cdf of the standard normal distribution, respectively. Then\n1− Φ(A) ≤ e−A 2/2, (11)\nE[ 1{ >A}] = φ(A), (12) E[ 21{ >A}] = 1− Φ(A) +Aφ(A). (13)" }, { "heading": "A.2.1 SUBOPTIMALITY OF (7)", "text": "Under the specificed conditions, the energy from (7) has a value of nd. Thus to show that it is not the global minimum, it suffices to show that the following VAE, parameterized by δ, has energy→ −∞ as δ → 0:\nµ(1)z = 1, µ (2) z = −1,\nW x = (α+ 1, α+ 1), bx = 0,\nσ(1)z = σ (2) z = δ,\nγ = EN (ε|0,1)2(1− πα((α+ 1)(1 + δε)))2. This follows because, given the stated parameters, we have that\nL(θ, φ) = 2∑ i=1 (1 + 2 logEN (ε|0,1)2(1− πα((α+ 1)(1 + δε)))2 − 2 log δ + δ2 + 1)\n= 2∑ i=1 (Θ(1) + 2 logEN (ε|0,1)(1− πα(α+ 1 + (α+ 1)δε))2 − 2 log δ)\n≤(i)4 log δ + Θ(1).\n(i) holds when δ < 1α+1 ; to see this, denote x := α+ 1 + (α+ 1)(δε). Then\nEN (ε|0,1)(1− πα(x))2\n=Eε[(1− πα(x))21{x≥α}] + Eε[(1− πα(x))21{|x|<α}] + Eε[(1− πα(x))21{x<−α}] ≤Eε[(1− (x− α))2]︸ ︷︷ ︸\n(a)\n+P(|x| < α)︸ ︷︷ ︸ (b) +Eε((1− x− α)21{x<−α})︸ ︷︷ ︸ (c) .\nIn the RHS above (a) = [(α+ 1)δ]2; using (11)-(13) we then have (b) < P(x < α) = P ( ε <\n−1 (α+ 1)δ\n) ≤ exp ( − 1\n2[(α+ 1)δ]2\n) .\n(c) < Eε((2α+ (α+ 1)δε)21{x<α})\n=\n∫ −1 (α+1)δ\n−∞ (2α+ (α+ 1)δε)2 1√ 2π e−ε 2/2dε\n<\n∫ −1 (α+1)δ\n−∞ (4α2 + [(α+ 1)δε]2) 1√ 2π e−ε 2/2dε\n< { 4α2 + ((α+ 1)δ)2 [ 1 +\n1√ 2π\n]} exp ( − 1\n2[(α+ 1)δ]2 ) when δ < 1α+1 . 
Thus\nlim δ→0\nEN (ε|0,1)(1− πα(x))2\n[(α+ 1)δ]2 = 1,\nand lim δ→0 {logEN (ε|0,1)(1− πα(x))2 − 2 log δ} = 2 log(α+ 1), or 2 logE (1− πα(x))2 = 4 log δ + Θ(1),\nand we can see (i) holds." }, { "heading": "A.2.2 LOCAL OPTIMALITY OF (7)", "text": "We will now show that at (7), the Hessian of the energy has structure\n(W x) (bx) (σ (i) z , µ (i) z ) (γ)\n(W x) 0 0 0 0 (bx) 0 2 γ I 0 0\n(σ (i) z , µ (i) z ) 0 0 (p.d.) 0\n(γ) 0 0 0 (p.d.)\nwhere p.d. means the corresponding submatrix is positive definite and independent of other parameters. While the Hessian is 0 in the subspace of W x, we can show that for VAEs that are only different from (7) byW x, the gradient always points back to (7). Thus (7) is a strict local minima.\nFirst we compute the Hessian matrix block-wise. We will identify W x ∈ R2×1 with the vector (Wj) 2 j=1, and use the shorthand notations x (i) = (x (i) j ) 2 j=1, bx = (bj) 2 j=1, z (i) = µ (i) z + σ (i) z ε, where ε ∼ N (0, 1) (recall that z(i) is a scalar in this proof).\n1. The second-order derivatives involvingW x can be expressed as\n∂L ∂Wj = −2 γ n∑ i=1 Eε[(π′α(Wjz(i))z(i)) · (x (i) j − πα(Wjz (i))− bj)], (14)\nand therefore all second-order derivatives involving Wj will have the form\nE [π′α(Wjz(i))F1 + π′′α(Wjz(i))F2], (15)\nwhere F1, F2 are some arbitrary functions that are finite at (7). Since π′α(0) = π ′′ α(0) = Wj = 0, the above always evaluates to 0 atW x = 0.\n2. For second-order derivatives involving bx, we have\n∂L ∂bx = −2 γ Eε[x(i) − πα(W xz(i))− bx]\nand\n∂2L ∂(bx)2 = 2 γ I,\n∂2L ∂γ∂bx = 2 γ2 ∂L ∂bx = 0, (sinceW x = 0);\nand ∂ 2L\n∂µ (i) z ∂bx\nand ∂ 2L\n∂µ (i) z ∂σ (i) z\nwill also have the form of (15), thus both equal 0 atW x = 0.\n3. Next consider second-order derivatives involving µ(i)z or σ (i) k . Since the KL part of the energy, ∑n i=1 KL(qφ(z|x(i))|p(z)), only depends on µ (i) z and σ (i) k , and have p.d. Hes-\nsian at (7) independent of other parameters, it suffices to calculate the derivatives of the reconstruction error part, denoted as Lrecon. Since\n∂Lrecon ∂µ (i) z = −2 γ ∑ i,j E [ (x (i) j − πα(Wjz (i))− bj)Wjπ′α(Wjz(i)) ] ,\n∂Lrecon ∂σ (i) z = −2 γ ∑ i,j E [ (x (i) j − πα(Wjz (i))− bj)Wj π′α(Wjz(i)) ] ,\nall second-order derivatives will have the form of (15), and equal 0 atW x = 0.\n4. For γ, we can calculate that ∂2L/∂γ2 = 4/γ2 > 0 at (7).\nNow, consider VAE parameters that are only different from (7) in W x. Plugging bx = x̄, µ (i) z = 0, σ (i) k = 1 into (14), we have\n∂L ∂Wj = −2 γ n∑ i=1 Eε[(π′α(Wjε)ε) · (−πα(Wjε))].\nAs (π′α(Wjε)ε) · (−πα(Wjε)) ≤ 0 always holds, we can see that the gradient points back to (7). This concludes our proof of (7) being a strict local minima." }, { "heading": "A.3 PROOF OF PROPOSITION 5.1", "text": "We begin by assuming an arbitrarily complex encoder for convenience. This allows us to remove the encoder-sponsored amortized inference and instead optimize independent parameters µ(i)z and σ (i) z separately for each data point. Later we will show that this capacity assumption can be dropped and the main result still holds.\nWe next define\nmz , [( µ(1)z )> , . . . , ( µ(n)z )>]> ∈ Rκn and sz , [( σ(1)z )> , . . . , ( σ(n)z )>]> ∈ Rκn, (16)\nwhich are nothing more than the concatenation of all of the decoder means and variances from each data point into the respective column vectors. It is also useful to decompose the assumed non-degenerate decoder parameters via\nθ ≡ [ψ,w] , ψ , θ\\w, (17) where w ∈ [0, 1] is a scalar such that µx (z; θ) ≡ µx (wz;ψ). 
Note that we can always reparameterize an existing deep architecture to extract such a latent scaling factor which we can then hypothetically optimize separately while holding the remaining parameters ψ fixed. Finally, with slight abuse of notation, we may then define the function\nf (wmz, wsz) , (18) n∑ i=1 f ( µ(i)z ,σ (i) z , [ψ̃, w],x (i) ) ≡ n∑ i=1 E N ( z|µ(i)z ,diag [ σ(i)z ]2) [‖x(i) − µx (wz; ψ̃) ‖22] . This is basically just the original function f summed over all training points, with ψ fixed at the corresponding values extracted from θ̃ while w serves as a free scaling parameter on the decoder.\nBased on the assumption of Lipschitz continuous gradients, we can always create the upper bound\nf (u,v) ≤ f (ũ, ṽ) (19) + (u− ũ)> ∇uf (u,v)|u=ũ + L 2 ‖u− ũ‖ 2 2 + (v − ṽ) > ∇vf (u,v)|v=ṽ + L 2 ‖v − ṽ‖ 2 2 ,\nwhere L is the Lipschitz constant of the gradients and we have adopted u , wmz and v , wσz to simplify notation. Equality occurs at the evaluation point {u,v} = {ũ, ṽ}. However, this bound does not account for the fact that we know ∇vf (u,v) ≥ 0 (i.e., f (u,v) is increasing w.r.t. v) and that v ≥ 0. Given these assumptions, we can produce the refined upper bound\nfub (u,v) ≥ f (u,v) , (20)\nwhere fub (u,v) ,\nf (ũ, ṽ) + (u− ũ)> ∇uf (u,v)|u=ũ + L 2 ‖u− ũ‖ 2 2+ nd∑ j=1 g ( vj , ṽj , ∇vjf (u,v) ∣∣ vj=ṽj ) (21)\nand the function g : R3 → R is defined as\ng (v, ṽ, δ) , (v − ṽ) δ + L2 (v − ṽ) 2 2 if v ≥ ṽ − δ L and {v, ṽ, δ} ≥ 0, −δ2 2L if v < ṽ − δ L and {v, ṽ, δ} ≥ 0,\n∞ otherwise.\n(22)\nGiven that\nṽ − δL = arg minv [ (v − ṽ) δ + L2 (v − ṽ) 2 2 ] and −δ 2 2L = minv [ (v − ṽ) δ + L2 (v − ṽ) 2 2 ] , (23)\nthe function g is basically just setting all values of (v − ṽ) δ + L2 ‖v − ṽ‖ 2 2 with negative slope to the minimum −δ 2\n2L . This change is possible while retaining an upper bound because f (u,v) is nondecreasing in v by stated assumption. Additionally, g is set to infinity for all v < 0 to enforce non-negatively.\nWhile it may be possible to proceed further using fub, we find it useful to consider a final modification. Specifically, we define the approximation\nfappr (u,v) ≈ fub (ũ, ṽ) , (24)\nwhere fappr (u,v) ,\nf (ũ, ṽ) + (u− ũ)> ∇uf (u,v)|u=ũ + L 2 ‖u− ũ‖ 2 2 + nd∑ j=1 gappr ( vj , ṽj , ∇vjf (u,v) ∣∣ vj=ṽj ) (25) and\ngappr (v, ṽ, δ) , −δ2 2L + δ2 2Lṽ2 v 2 if ṽ − δL ≥ 0 and {v, ṽ, δ} ≥ 0,( Lṽ2 2 − δṽ ) + ( δ ṽ − L 2 ) v2 if ṽ − δL < 0 and {v, ṽ, δ} ≥ 0,\n∞ otherwise.\n(26)\nWhile slightly cumbersome to write out, gappr has a simple interpretation. By construction, we have that\nmin v gappr (v, ṽ, δ) = gappr (0, ṽ, δ) = min v g (v, ṽ, δ) = g (0, ṽ, δ) (27)\nand gappr (ṽ, ṽ, δ) = g (ṽ, ṽ, δ) = 0. (28)\nAt other points, gappr is just a simple quadratic interpolation but without any factor that is linear in v. And removal of this linear term, while retaining (27) and (27) will be useful for the analysis that follows below. Note also that although fappr (u,v) is no longer a strict bound on f (u,v), it will nonetheless still be an upper bound whenever vj ∈ {0, ṽj} for all j which will ultimately be sufficient for our purposes.\nWe now consider optimizing the function\nhappr(mz, sz, w) , 1γ f appr (wmz, wsz) + n∑ i=1 ∥∥∥µ(i)z ∥∥∥2 2 + ∥∥∥σ(i)z ∥∥∥2 2 − log ∣∣∣∣diag [σ(i)z ]2∣∣∣∣ . (29) If we define L (mz, sz, w) as the VAE cost from (4) under the current parameterization, then by design it follows that happr(m̃z, s̃z, w̃) = L (m̃z, s̃z, w̃) (30) and happr(mz, sz, w) ≥ L (mz, sz, w) (31) whenever wσj ∈ {0, w̃σ̃j} for all j. 
Therefore, if we find such a solution $\{m_z', s_z', w'\}$ that satisfies this condition and has $h^{appr}(m_z', s_z', w') < h^{appr}(\tilde m_z, \tilde s_z, \tilde w)$, it necessitates that $\mathcal{L}(m_z', s_z', w') < \mathcal{L}(\tilde m_z, \tilde s_z, \tilde w)$ as well. This then ensures that $\{\tilde m_z, \tilde s_z, \tilde w\}$ cannot be a local minimum.

We now examine the function $h^{appr}$ more closely. After a few algebraic manipulations and excluding irrelevant constants, we have that
$$h^{appr}(m_z, s_z, w) \equiv \sum_{j=1}^{n\kappa}\left\{\frac{1}{\gamma}\left[w m_{z,j} \left.\nabla_{u_j} f(u,v)\right|_{u_j=\tilde w\tilde m_{z,j}} + \frac{L}{2}\left(w^2 m_{z,j}^2 - 2 w m_{z,j}\tilde w\tilde m_{z,j}\right) + c_j w^2 s_{z,j}^2\right] + m_{z,j}^2 + s_{z,j}^2 - \log s_{z,j}^2\right\}, \tag{32}$$
where $c_j$ is the coefficient on the $v^2$ term from (26). After rearranging terms, optimizing out $m_z$ and $s_z$, and discarding constants, we can then obtain (with slight abuse of notation) the reduced function
$$h^{appr}(w) \triangleq \sum_{j=1}^{n\kappa}\left[\frac{y_j}{\gamma + \beta w^2} + \log\left(\gamma + c_j w^2\right)\right], \tag{33}$$
where $\beta \triangleq \frac{L}{2}$ and $y_j \triangleq \frac{L}{2}\left\|\tilde w\tilde m_{z,j} - \frac{1}{L}\left.\nabla_{u_j} f(u,v)\right|_{u_j=\tilde w\tilde m_{z,j}}\right\|_2^2$. Note that $y_j$ must be bounded, since $L \neq 0$⁴ and $w \in [0,1]$, $\left.\nabla_{u_j} f(u,v)\right|_{u_j=\tilde w\tilde m_{z,j}} \le L$, and $\tilde m$ are all bounded. The latter is implicitly bounded because the VAE KL term prevents infinite encoder mean functions. Furthermore, $c_j$ must be strictly greater than zero per the definition of a non-degenerate decoder; this guarantees that
$$g^{appr}\left(\tilde w\tilde s_j,\ \tilde w\tilde s_j,\ \left.\nabla_{v_j} f(u,v)\right|_{v_j=\tilde w\tilde s_j}\right) > g^{appr}\left(0,\ \tilde w\tilde s_j,\ \left.\nabla_{v_j} f(u,v)\right|_{v_j=\tilde w\tilde s_j}\right), \tag{34}$$
which is only possible with $c_j > 0$. Proceeding further, because
$$\nabla_{w^2}\, h^{appr}(w) = \sum_{j=1}^{n\kappa}\left(\frac{-\beta y_j}{(\gamma + \beta w^2)^2} + \frac{c_j}{\gamma + c_j w^2}\right), \tag{35}$$
we observe that if $\gamma$ is made sufficiently large, the first term will always be smaller than the second, since $\beta$ and all $y_j$ are bounded and $c_j > 0\ \forall j$. So there can never be a point whereby $\nabla_{w^2} h^{appr}(w) = 0$ when $\gamma = \gamma'$ is sufficiently large. Therefore the minimum in this situation occurs on the boundary where $w^2 = 0$. And finally, if $w^2 = 0$, then the optimal $m_z$ and $s_z$ are determined solely by the KL term, and hence they are set according to the prior. Moreover, the decoder receives no signal from the encoder and is therefore optimized by simply setting $\mu_x(0; \tilde\psi)$ to the mean $\bar x$ for all $i$.⁵ Additionally, none of this analysis requires an arbitrarily complex encoder; the exact same results hold as long as the encoder can output a 0 for the means and 1 for the variances.

Note also that if we proceed through the above analysis using $w \in \mathbb{R}^\kappa$, parameterizing a separate $w_j$ scaling factor for each latent dimension $j \in \{1, \ldots, \kappa\}$, then a smaller $\gamma$ value would generally force partial collapse. In other words, we could enforce nonzero gradients of $h^{appr}(w)$ along the indices of each latent dimension separately. This looser criterion would then lead to $q_{\phi^*}(z_j|x) = p(z_j)$ along some but not all latent dimensions, as stated in the main text below Proposition 5.1." }, { "heading": "A.4 REPRESENTATIVE STATIONARY POINT EXHIBITING POSTERIOR COLLAPSE IN DEEP VAE MODELS", "text": "Here we provide an example of a stationary point that exhibits posterior collapse with an arbitrary deep encoder/decoder architecture. This example is representative of many other possible cases. Assume both the encoder and decoder mean functions $\mu_z$ and $\mu_x$, as well as the diagonal encoder covariance function $\Sigma_z = \mathrm{diag}[\sigma_z^2]$, are computed by standard deep neural networks, with layers composed of linear weights followed by element-wise nonlinear activations (the decoder covariance satisfies $\Sigma_x = \gamma I$ as before). We denote the weight matrix from the first layer of the decoder mean network as $W^1_{\mu_x}$, while $w^1_{\mu_x,\cdot j}$ refers to the corresponding $j$-th column. 
Assuming $\rho$ layers, we denote $W^\rho_{\mu_z}$ and $W^\rho_{\sigma_z^2}$ as the weights from the last layers of the encoder networks producing $\mu_z$ and $\log\sigma_z^2$ respectively, with $j$-th rows defined as $w^\rho_{\mu_z,j\cdot}$ and $w^\rho_{\sigma_z^2,j\cdot}$. We then characterize the following key stationary point:

Proposition A.2 If $w^1_{\mu_x,\cdot j} = \left(w^\rho_{\mu_z,j\cdot}\right)^\top = \left(w^\rho_{\sigma_z^2,j\cdot}\right)^\top = 0$ for any $j \in \{1, 2, \ldots, \kappa\}$, then the gradients of (4) with respect to $w^1_{\mu_x,\cdot j}$, $w^\rho_{\mu_z,j\cdot}$, and $w^\rho_{\sigma_z^2,j\cdot}$ are all equal to zero.

⁴ $L = 0$ would violate the stipulated conditions for a non-degenerate decoder, since it would imply that no signal from $z$ could pass through the decoder. And of course if $L = 0$, we would already be at a solution exhibiting posterior collapse.

⁵ We are assuming here that the decoder has sufficient capacity to model any constant value, e.g., the output layer has a bias term.

If the stated weights are zero along dimension $j$, then obviously it must be that $q_\phi(z_j|x) = p(z_j)$, i.e., a collapsed dimension for better or worse. The proof is straightforward; we provide the details below for completeness.

Proof: First, recall that the variational upper bound is defined in (2). We define $\mathcal{L}(x; \theta, \phi)$ as the loss at a data point $x$, i.e.,
$$\mathcal{L}(x; \theta, \phi) = -\mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] + \mathrm{KL}\left[q_\phi(z|x)\,\|\,p(z)\right]. \tag{36}$$
The total loss is the integration of $\mathcal{L}(x; \theta, \phi)$ over $x$. Furthermore, we denote $\mathcal{L}_{kl}(x; \phi)$ and $\mathcal{L}_{gen}(x; \theta, \phi)$ as the KL loss and the generation loss at $x$ respectively, i.e.,
$$\mathcal{L}_{kl}(x; \phi) = \mathrm{KL}\left[q_\phi(z|x)\,\|\,p(z)\right] = \sum_{j=1}^\kappa \mathrm{KL}\left[q_\phi(z_j|x)\,\|\,p(z_j)\right] = \frac{1}{2}\sum_{j=1}^\kappa\left(\mu_{z,j}^2 + \sigma_{z,j}^2 - \log\sigma_{z,j}^2 - 1\right), \tag{37}$$
$$\mathcal{L}_{gen}(x; \theta, \phi) = -\mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right]. \tag{38}$$
The last equality in (37) holds because the covariances of $q_\phi(z|x)$ and $p(z)$ are both diagonal. The last encoder layer and the first decoder layer are denoted $h^\rho_e$ and $h^1_d$. If $w^\rho_{\mu_z,j\cdot} = 0$ and $w^\rho_{\sigma_z^2,j\cdot} = 0$, then we have
$$\mu_{z,j} = w^\rho_{\mu_z,j\cdot}\, h^\rho_e = 0, \qquad \sigma_{z,j}^2 = \exp\left(w^\rho_{\sigma_z^2,j\cdot}\, h^\rho_e\right) = 1, \qquad q(z_j|x) = \mathcal{N}(0, 1). \tag{39}$$
The gradients of $\mu_{z,j}$ and $\sigma_{z,j}$ from $\mathcal{L}_{kl}(x; \phi)$ become
$$\frac{\partial \mathcal{L}_{kl}(x; \phi)}{\partial \mu_{z,j}} = \mu_{z,j} = 0, \qquad \frac{\partial \mathcal{L}_{kl}(x; \phi)}{\partial \sigma_{z,j}} = 1 - \sigma_{z,j}^{-1} = 0. \tag{40}$$
So the gradients of $w^\rho_{\mu_z,j\cdot}$ and $w^\rho_{\sigma_z^2,j\cdot}$ from $\mathcal{L}_{kl}$ are
$$\frac{\partial \mathcal{L}_{kl}(x; \phi)}{\partial w^\rho_{\mu_z,j\cdot}} = \frac{\partial \mathcal{L}_{kl}(x; \phi)}{\partial \mu_{z,j}}\, \left(h^\rho_e\right)^\top = 0, \tag{41}$$
$$\frac{\partial \mathcal{L}_{kl}(x; \phi)}{\partial w^\rho_{\sigma_z^2,j\cdot}} = \frac{\sigma_{z,j}}{2}\,\frac{\partial \mathcal{L}_{kl}(x; \phi)}{\partial \sigma_{z,j}}\, \left(h^\rho_e\right)^\top = 0. \tag{42}$$
Now we consider the gradient from $\mathcal{L}_{gen}(x; \theta, \phi)$. We have
$$-\frac{\partial \log p_\theta(x|z)}{\partial z_j} = -\frac{\partial \log p_\theta(x|z)}{\partial h^1_d}\,\frac{\partial h^1_d}{\partial z_j}. \tag{43}$$
Since
$$h^1_d = \mathrm{act}\left(\sum_{j=1}^\kappa w^1_{\mu_x,\cdot j}\, z_j\right), \tag{44}$$
where $\mathrm{act}(\cdot)$ is the activation function, we can obtain
$$\frac{\partial h^1_d}{\partial z_j} = \mathrm{act}'\left(\sum_{j=1}^\kappa w^1_{\mu_x,\cdot j}\, z_j\right) w^1_{\mu_x,\cdot j} = 0. \tag{45}$$
Plugging this back into (43) gives
$$-\frac{\partial \log p_\theta(x|z)}{\partial z_j} = 0. \tag{46}$$
According to the chain rule, we have
$$\frac{\partial \mathcal{L}_{gen}(x; \theta, \phi)}{\partial w^\rho_{\mu_z,j\cdot}} = \mathbb{E}_{z\sim q_\phi(z|x)}\left[-\frac{\partial \log p_\theta(x|z)}{\partial z_j}\,\frac{\partial z_j}{\partial w^\rho_{\mu_z,j\cdot}}\right] = 0, \tag{47}$$
$$\frac{\partial \mathcal{L}_{gen}(x; \theta, \phi)}{\partial w^\rho_{\sigma_z^2,j\cdot}} = \mathbb{E}_{z\sim q_\phi(z|x)}\left[-\frac{\partial \log p_\theta(x|z)}{\partial z_j}\,\frac{\partial z_j}{\partial w^\rho_{\sigma_z^2,j\cdot}}\right] = 0. \tag{48}$$
After combining these two equations with (41) and (42) and then integrating over $x$, we have
$$\frac{\partial \mathcal{L}(\theta, \phi)}{\partial w^\rho_{\mu_z,j\cdot}} = 0, \tag{49}$$
$$\frac{\partial \mathcal{L}(\theta, \phi)}{\partial w^\rho_{\sigma_z^2,j\cdot}} = 0. \tag{50}$$
Then we consider the gradient with respect to $w^1_{\mu_x,\cdot j}$. Since $w^1_{\mu_x,\cdot j}$ is part of $\theta$, it only receives gradient from $\mathcal{L}_{gen}(x; \theta, \phi)$, so we do not need to consider the KL loss. If $w^1_{\mu_x,\cdot j} = 0$, then $h^1_d = \mathrm{act}\left(\sum_{j=1}^\kappa w^1_{\mu_x,\cdot j}\, z_j\right)$ does not depend on $z_j$, so $p_\theta(x|z) = p_\theta(x|z_{\neg j})$, where $z_{\neg j}$ represents $z$ without the $j$-th dimension. The gradient of $w^1_{\mu_x,\cdot j}$ is
$$\frac{\partial \mathcal{L}_{gen}(x; \theta, \phi)}{\partial w^1_{\mu_x,\cdot j}} = \mathbb{E}_{z\sim q(z|x)}\left[-\frac{\partial \log p_\theta(x|z)}{\partial w^1_{\mu_x,\cdot j}}\right] = \mathbb{E}_{z\sim q(z|x)}\left[-\frac{\partial \log p_\theta(x|z)}{\partial h^1_d}\, z_j\right] \tag{51}$$
$$= \mathbb{E}_{z_{\neg j}\sim q(z_{\neg j}|x)}\left[\mathbb{E}_{z_j\sim\mathcal{N}(0,1)}\left[-\frac{\partial \log p_\theta(x|z_{\neg j})}{\partial h^1_d}\, z_j\right]\right] = \mathbb{E}_{z_{\neg j}\sim q(z_{\neg j}|x)}\left[-\frac{\partial \log p_\theta(x|z_{\neg j})}{\partial h^1_d}\,\mathbb{E}_{z_j\sim\mathcal{N}(0,1)}[z_j]\right] = 0.$$
The integration over $x$ is then also 0, so we obtain
$$\frac{\partial \mathcal{L}(\theta, \phi)}{\partial w^1_{\mu_x,\cdot j}} = 0. \tag{52}$$" } ]
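(The stationary point of Proposition A.2 can also be illustrated numerically. The PyTorch sketch below is our own addition, not the paper's code; the architecture, dimensions, and value of gamma are arbitrary choices, and the encoder output layers use no bias so that mu_{z,j} = 0 and log sigma_{z,j}^2 = 0 hold exactly, as in the proof. The encoder-row gradients vanish exactly per sample, while the decoder-column gradient vanishes only in expectation over epsilon ~ N(0,1), so it is checked by Monte Carlo averaging.)

# Numerical illustration of Proposition A.2; our own sketch with arbitrary sizes.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, kappa, h, j, gamma = 4, 3, 8, 0, 0.5
enc_body = nn.Sequential(nn.Linear(d, h), nn.Tanh())
W_mu = nn.Linear(h, kappa, bias=False)   # last encoder layer producing mu_z
W_lv = nn.Linear(h, kappa, bias=False)   # last encoder layer producing log sigma_z^2
W1 = nn.Linear(kappa, h)                 # first decoder layer
dec_out = nn.Sequential(nn.Tanh(), nn.Linear(h, d))

with torch.no_grad():                    # impose the stationary point of Prop. A.2
    W_mu.weight[j, :] = 0.0
    W_lv.weight[j, :] = 0.0
    W1.weight[:, j] = 0.0

n = 20000                                # Monte Carlo over x and epsilon
x = torch.randn(n, d)
h_e = enc_body(x)
mu, logvar = W_mu(h_e), W_lv(h_e)
z = mu + torch.exp(0.5 * logvar) * torch.randn(n, kappa)
recon = dec_out(W1(z))
loss = ((x - recon).pow(2).sum(1) / gamma
        + 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(1)).mean()
loss.backward()

# Encoder rows: exactly zero per sample; decoder column: zero in expectation.
print(W_mu.weight.grad[j].abs().max())   # 0
print(W_lv.weight.grad[j].abs().max())   # 0
print(W1.weight.grad[:, j].abs().max())  # approximately 0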
2019
null
SP:c8c5809f731c2f0c6bf01e24bc4d9eb7cf924ccd
[ "This is an interesting paper, as it tries to understand the role of hierarchical methods (such as options and higher-level controllers) in RL. The core contribution of the paper is to understand and evaluate the claimed benefits often proposed by hierarchical methods, and it finds that the core benefit in fact comes from exploration. The paper studies hierarchical methods and eventually draws the conclusion that HRL in fact leads to better exploration-based behaviour in complex tasks. ", "This paper evaluates the benefits of using hierarchical RL (HRL) methods compared to regular shallow RL methods for fully observed MDPs. The goal of the work is to isolate and evaluate the benefits of using HRL on different control tasks (AntMaze, AntPush, AntBlock, AntBlockMaze). They find that the major benefit of HRL comes in the form of better exploration, rather than from easier policy learning. They claim that the use of multi-step rewards alone is sufficient to provide the training benefits associated with HRL. They also provide two exploration methods that are not hierarchical in nature but achieve similar performance: a) Explore and Exploit and b) Switching Ensemble." ]
Hierarchical reinforcement learning has demonstrated significant success at solving difficult reinforcement learning (RL) tasks. Previous works have motivated the use of hierarchy by appealing to a number of intuitive benefits, including learning over temporally extended transitions, exploring over temporally extended periods, and training and exploring in a more semantically meaningful action space, among others. However, in fully observed, Markovian settings, it is not immediately clear why hierarchical RL should provide benefits over standard “shallow” RL architectures. In this work, we isolate and evaluate the claimed benefits of hierarchical RL on a suite of tasks encompassing locomotion, navigation, and manipulation. Surprisingly, we find that most of the observed benefits of hierarchy can be attributed to improved exploration, as opposed to easier policy learning or imposed hierarchical structures. Given this insight, we present exploration techniques inspired by hierarchy that achieve performance competitive with hierarchical RL while at the same time being much simpler to use and implement.
[]
[ { "authors": [ "Mohammad Gheshlaghi Azar", "Ian Osband", "Rémi Munos" ], "title": "Minimax regret bounds for reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "David Badre", "Michael J Frank" ], "title": "Mechanisms of hierarchical reinforcement learning in cortico– striatal circuits 2: Evidence from fmri", "venue": "Cerebral cortex,", "year": 2011 }, { "authors": [ "Adrien Baranes", "Pierre-Yves Oudeyer" ], "title": "Intrinsically motivated goal exploration for active motor learning in robots: A case study", "venue": "In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2010 }, { "authors": [ "Andrew G Barto", "Sridhar Mahadevan" ], "title": "Recent advances in hierarchical reinforcement learning", "venue": "Discrete Event Dynamic Systems,", "year": 2003 }, { "authors": [ "Matthew Michael Botvinick" ], "title": "Hierarchical reinforcement learning and decision making", "venue": "Current opinion in neurobiology,", "year": 2012 }, { "authors": [ "Emma Brunskill", "Lihong Li" ], "title": "Pac-inspired option discovery in lifelong reinforcement learning", "venue": "In International conference on machine learning,", "year": 2014 }, { "authors": [ "Peter Dayan", "Geoffrey E Hinton" ], "title": "Feudal reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Thomas G Dietterich" ], "title": "Hierarchical reinforcement learning with the maxq value function decomposition", "venue": "Journal of Artificial Intelligence Research,", "year": 2000 }, { "authors": [ "Carlos Florensa", "Yan Duan", "Pieter Abbeel" ], "title": "Stochastic neural networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1704.03012,", "year": 2017 }, { "authors": [ "Meire Fortunato", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Jacob Menick", "Ian Osband", "Alex Graves", "Vlad Mnih", "Remi Munos", "Demis Hassabis", "Olivier Pietquin" ], "title": "Noisy networks for exploration", "venue": "arXiv preprint arXiv:1706.10295,", "year": 2017 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "Dave Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "arXiv preprint arXiv:1802.09477,", "year": 2018 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "Nicolas Heess", "Srinivasan Sriram", "Jay Lemmon", "Josh Merel", "Greg Wayne", "Yuval Tassa", "Tom Erez", "Ziyu Wang", "Ali Eslami", "Martin Riedmiller" ], "title": "Emergence of locomotion behaviours in rich environments", "venue": "arXiv preprint arXiv:1707.02286,", "year": 2017 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Nicholas K Jong", "Todd Hester", "Peter Stone" ], "title": "The utility of temporal abstraction in reinforcement learning. 
In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems-Volume 1, pp. 299–306", "venue": "International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2008 }, { "authors": [ "Leslie Pack Kaelbling" ], "title": "Hierarchical learning in stochastic domains: Preliminary results", "venue": "In Proceedings of the tenth international conference on machine learning,", "year": 1993 }, { "authors": [ "Tejas D Kulkarni", "Karthik Narasimhan", "Ardavan Saeedi", "Josh Tenenbaum" ], "title": "Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Andrew Levy", "Robert Platt", "Kate Saenko" ], "title": "Hierarchical actor-critic", "venue": "arXiv preprint arXiv:1712.00948,", "year": 2017 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Long-Ji Lin" ], "title": "Self-improving reactive agents based on reinforcement learning, planning and teaching", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Marios C Machado", "Marc G Bellemare", "Michael Bowling" ], "title": "A laplacian framework for option discovery in reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Marlos C Machado", "Clemens Rosenbaum", "Xiaoxiao Guo", "Miao Liu", "Gerald Tesauro", "Murray Campbell" ], "title": "Eigenoption discovery through the deep successor representation", "venue": "arXiv preprint arXiv:1710.11089,", "year": 2017 }, { "authors": [ "Timothy Mann", "Shie Mannor" ], "title": "Scaling up approximate value iteration with options: Better policies with fewer iterations", "venue": "In International conference on machine learning,", "year": 2014 }, { "authors": [ "Amy McGovern", "Andrew G Barto" ], "title": "Automatic discovery of subgoals in reinforcement learning using diverse density", "venue": null, "year": 2001 }, { "authors": [ "Ofir Nachum", "Shane Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Near-optimal representation learning for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1810.01257,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Michael Ahn", "Hugo Ponte", "Shixiang Gu", "Vikash Kumar" ], "title": "Multi-agent manipulation via locomotion using hierarchical sim2real", "venue": null, "year": 1908 }, { "authors": [ "Ian Osband", "Benjamin Van Roy", "Zheng Wen" ], "title": "Generalization and exploration via randomized value functions", "venue": "arXiv preprint arXiv:1402.0635,", "year": 2014 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped dqn", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ronald Parr", "Stuart J Russell" ], "title": "Reinforcement learning with hierarchies of machines", "venue": "In Advances in neural information processing systems,", "year": 1998 }, { "authors": [ "Matthias Plappert", "Rein 
Houthooft", "Prafulla Dhariwal", "Szymon Sidor", "Richard Y Chen", "Xi Chen", "Tamim Asfour", "Pieter Abbeel", "Marcin Andrychowicz" ], "title": "Parameter space noise for exploration", "venue": null, "year": 1905 }, { "authors": [ "Doina Precup" ], "title": "Temporal abstraction in reinforcement learning", "venue": "University of Massachusetts Amherst,", "year": 2000 }, { "authors": [ "Martin L Puterman" ], "title": "Markov Decision Processes.: Discrete Stochastic Dynamic Programming", "venue": null, "year": 2014 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Planning simple trajectories using neural subgoal generators", "venue": "In From Animals to Animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior,", "year": 1993 }, { "authors": [ "Alexander L Strehl", "Lihong Li", "Michael L Littman" ], "title": "Reinforcement learning in finite mdps: Pac analysis", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Richard S Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial intelligence,", "year": 1999 }, { "authors": [ "Alexander Sasha Vezhnevets", "Simon Osindero", "Tom Schaul", "Nicolas Heess", "Max Jaderberg", "David Silver", "Koray Kavukcuoglu" ], "title": "Feudal networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1703.01161,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many real-world tasks may be decomposed into natural hierarchical structures. To navigate a large building, one first needs to learn how to walk and turn before combining these behaviors to achieve robust navigation; to wash dishes, one first needs to learn basic object grasping and handling before composing a sequence of these primitives to successfully clean a collection of plates. Accordingly, hierarchy is an important topic in the context of reinforcement learning (RL), in which an agent learns to solve tasks from trial-and-error experience, and the use of hierarchical reinforcement learning (HRL) has long held the promise to elevate the capabilities of RL agents to more complex tasks (Dayan & Hinton, 1993; Schmidhuber, 1993; Parr & Russell, 1998; Barto & Mahadevan, 2003).\nRecent work has made much progress towards delivering on this promise (Levy et al., 2017; Frans et al., 2018; Vezhnevets et al., 2017; Nachum et al., 2019). For example, Nachum et al. (2018a;b; 2019) use HRL to solve both simulated and real-world quadrupedal manipulation tasks, whereas state-of-the-art non-hierarchical methods are shown to make negligible progress on the same tasks. Levy et al. (2017) demonstrate similar results on complex navigation tasks, showing that HRL can find good policies with 3-5x fewer environment interactions than non-hierarchical methods.\nWhile the empirical success of HRL is clear, the underlying reasons for this success are more difficult to explain. Prior works have motivated the use of HRL with a number of intuitive arguments: high-level actions are proposed at a lower temporal frequency than the atomic actions of the environment, effectively shortening the length of episodes; high-level actions often correspond to more semantically meaningful behaviors than the atomic actions of the environment, so both exploration and learning in this high-level action space is easier; and so on. These claims are easy to understand intuitively, and some may even be theoretically motivated (e.g., shorter episodes are indeed easier to learn; see Strehl et al. (2009); Azar et al. (2017)). On the other hand, the gap between any theoretical setting and the empirical settings in which these hierarchical systems excel is wide. Furthermore, in Markovian systems, there is no theoretical representational benefit to imposing temporally extended, hierarchical structures, since non-hierarchical policies that make a decision at every step can be optimal (Puterman, 2014). Nevertheless, the empirical advantages of hierarchy are self-evident in a number of recent works, which raises the question, why is hierarchy beneficial in these settings? Which of the claimed benefits of hierarchy contribute to its empirical successes?\nIn this work, we answer these questions via empirical analysis on a suite of tasks encompassing locomotion, navigation, and manipulation. We devise a series of experiments to isolate and evaluate the claimed benefits of HRL. Surprisingly, we find that most of the empirical benefit of hierarchy in our considered settings can be attributed to improved exploration. Given this observation, we propose a number of exploration methods that are inspired by hierarchy but are much simpler to use and implement. These proposed exploration methods enable non-hierarchical RL agents to achieve performance competitive with state-of-the-art HRL. 
Although our analysis is empirical and thus our conclusions are limited to the tasks we consider, we believe that our findings are important to the field of HRL. Our findings reveal that only a subset of the claimed benefits of hierarchy are achievable by current state-of-the-art methods, even on tasks that were previously believed to be approachable only by HRL methods. Thus, more work must be done to devise hierarchical systems that achieve all of the claimed benefits. We also hope that our findings can provide useful insights for future research on exploration in RL. Our findings show that exploration research can be informed by successful techniques in HRL to realize more temporally extended and semantically meaningful exploration strategies." }, { "heading": "2 RELATED WORK", "text": "Due to its intuitive and biological appeal (Badre & Frank, 2011; Botvinick, 2012), the field of HRL has been an active research topic in the machine learning community for many years. A number of different architectures for HRL have been proposed in the literature (Dayan & Hinton, 1993; Kaelbling, 1993; Parr & Russell, 1998; Sutton et al., 1999; Dietterich, 2000; Florensa et al., 2017; Heess et al., 2017). We consider two paradigms specifically – the options framework (Precup, 2000) and goal-conditioned hierarchies (Nachum et al., 2018b), due to their impressive success in recent work (Frans et al., 2018; Levy et al., 2017; Nachum et al., 2018a; 2019), though an examination of other architectures is an important direction for future research.\nOne traditional approach to better understanding and justifying the use of an algorithm is through theoretical analysis. In tabular environments, there exist bounds on the sample complexity of learning a near-optimal policy dependent on the number of actions and effective episode horizon (Brunskill & Li, 2014). This bound can be used to motivate HRL when the high-level action space is smaller than the atomic action space (smaller number of actions) or the higher-level policy operates at a temporal abstraction greater than one (shorter effective horizon). Previous work has also analyzed HRL (specifically, the options framework) in the more general setting of continuous states (Mann & Mannor, 2014). However, these theoretical statements rely on having access to near-optimal options, which are typically not available in practice. Moreover, while simple synthetic tasks can be constructed to demonstrate these theoretical benefits, it is unclear if any of these benefits actually play a role in empirical successes demonstrated in more complex environments. In contrast, our empirical analysis is specifically devised to isolate and evaluate the observed practical benefits of HRL.\nOur approach to isolating and evaluating the benefits of hierarchy via empirical analysis is partly inspired by previous empirical analysis on the benefits of options (Jong et al., 2008). Following a previous flurry of research, empirical demonstrations, and claimed intuitive benefits of options in the early 2000’s, Jong et al. (2008) set out to systematically evaluate these techniques. Similar to our findings, exploration was identified as a key benefit, although realizing this benefit relied on the use of specially designed options and excessive prior knowledge of the task. Most of the remaining observed empirical benefits were found to be due to the use of experience replay (Lin, 1992), and the same performance could be achieved with experience replay alone on a non-hierarchical agent. 
Nowadays, experience replay is a ubiquitous component of RL algorithms. Moreover, the hierarchical paradigms of today are largely model-free and achieve more impressive practical results than the gridworld tasks evaluated by Jong et al. (2008). Therefore, we present our work as a recalibration of the field’s understanding with regards to current state-of-the-art hierarchical methods." }, { "heading": "3 HIERARCHICAL REINFORCEMENT LEARNING", "text": "We briefly summarize the HRL methods and environments we evaluate on. We consider the typical two-layer hierarchical design, in which a higher-level policy solves a task by directing one or more lower-level policies. In the simplest case, the higher-level policy chooses a new high-level action every c timesteps.¹ In the options framework, the high-level action is a discrete choice, indicating which of m lower-level policies (called options) to activate for the next c steps. In goal-conditioned hierarchies, there is a single goal-conditioned lower-level policy, and the high-level action is a continuous-valued goal state which the lower level is directed to reach.

¹We restrict our analysis to hierarchies using fixed c, although evaluating variable-length temporal abstractions is an important avenue for future work.

Lower-level policy training operates differently in each of the HRL paradigms. For the options framework, we follow Bacon et al. (2017); Frans et al. (2018), training each lower-level policy to maximize environment reward. We train m separate Q-value functions to minimize the errors
$$E(s_t, a_t, R_t, s_{t+1}) = \left(Q_{lo,m}(s_t, a_t) - R_t - \gamma Q_{lo,m}(s_{t+1}, \pi_{lo,m}(s_{t+1}))\right)^2 \tag{1}$$
over single-step transitions, and the m option policies are learned to maximize this Q-value, $Q_{lo,m}(s_t, \pi_{lo,m}(s_t))$. In contrast, for HIRO (Nachum et al., 2018a) and HAC (Levy et al., 2017), the lower-level policy and Q-function are goal-conditioned. That is, a Q-function is learned to minimize the errors
$$E(s_t, g_t, a_t, r_t, s_{t+1}, g_{t+1}) = \left(Q_{lo}(s_t, g_t, a_t) - r_t - \gamma Q_{lo}(s_{t+1}, g_{t+1}, \pi_{lo}(s_{t+1}, g_{t+1}))\right)^2 \tag{2}$$
over single-step transitions, where $g_t$ is the current goal (a high-level action updated every c steps) and $r_t$ is an intrinsic reward measuring negative L2 distance to the goal. The lower-level policy is then trained to maximize the Q-value $Q_{lo}(s_t, g_t, \pi_{lo}(s_t, g_t))$.

For higher-level training we follow Nachum et al. (2018a); Frans et al. (2018) and train based on temporally extended c-step transitions $(s_t, g_t, R_{t:t+c-1}, s_{t+c})$, where $g_t$ is a high-level action (a discrete identifier for options, a goal for goal-conditioned hierarchies) and $R_{t:t+c-1} = \sum_{k=0}^{c-1} R_{t+k}$ is the c-step sum of environment rewards. That is, a Q-value function is learned to minimize the errors
$$E(s_t, g_t, R_{t:t+c-1}, s_{t+c}) = \left(Q_{hi}(s_t, g_t) - R_{t:t+c-1} - \gamma Q_{hi}(s_{t+c}, \pi_{hi}(s_{t+c}))\right)^2. \tag{3}$$
In the options framework, where high-level actions are discrete, the higher-level policy is simply $\pi_{hi}(s) := \arg\max_g Q_{hi}(s, g)$. In goal-conditioned HRL, where high-level actions are continuous, the higher-level policy is learned to maximize the Q-value $Q_{hi}(s, \pi_{hi}(s))$.

Note that higher-level training in HRL is distinct from the use of multi-step rewards or n-step returns (Hessel et al., 2018), which proposes to train a non-hierarchical agent with respect to transitions $(s_t, a_t, R_{t:t+c_{rew}-1}, s_{t+c_{rew}})$; i.e., the Q-value of a non-HRL policy is learned to minimize
$$E(s_t, a_t, R_{t:t+c_{rew}-1}, s_{t+c_{rew}}) = \left(Q(s_t, a_t) - R_{t:t+c_{rew}-1} - \gamma Q(s_{t+c_{rew}}, \pi(s_{t+c_{rew}}))\right)^2, \tag{4}$$
while the policy is learned to choose atomic actions to maximize $Q(s, \pi(s))$. 
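To make this concrete, here is a minimal sketch of the multi-step target in Eq. (4); this is our own illustration, not the paper's code, and note that, following the convention of Eq. (3), the within-window reward sum is undiscounted, unlike a standard n-step return.

# Multi-step reward target of Eq. (4); illustrative sketch only.
import numpy as np

def multi_step_target(rewards, q_next, t, c_rew, gamma):
    # R_{t:t+c_rew-1}: undiscounted sum of the next c_rew environment rewards.
    r_sum = np.sum(rewards[t:t + c_rew])
    # Bootstrap with Q(s_{t+c_rew}, pi(s_{t+c_rew})), e.g. from a target network.
    return r_sum + gamma * q_next

rewards = np.array([0.0, 0.0, 1.0, 0.0, 2.0])
print(multi_step_target(rewards, q_next=3.0, t=1, c_rew=3, gamma=0.99))  # 3.97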
In contrast, in HRL both the rewards and the actions gt used in the Q-value regression loss are temporally extended. However, as we will see in Section 5.2, the use of multi-step rewards alone can achieve almost all of the benefits associated with hierarchical training (controlling for exploration benefits).\nFor our empirical analysis, we consider four difficult tasks involving simulated robot locomotion, navigation, and object manipulation (see Figure 1). To alleviate issues of goal representation learning in goal-conditioned HRL, we fix the goals to be relative x, y coordinates of the agent, which are a naturally good representation for our considered tasks. We note that this is only done to better control our empirical analysis, and that goal-conditioned HRL can achieve good performance on our considered tasks without this prior knowledge (Nachum et al., 2018b). We present the results of two goal-conditioned HRL methods: HIRO (Nachum et al., 2018a) and HIRO with goal relabelling (inspired by HAC; Levy et al. (2017)) and an options implementation based on Frans et al. (2018) in Figure 1. HRL methods can achieve strong performance on these tasks, while non-hierarchical methods struggle to make any progress at all. In this work, we strive to isolate and evaluate the key properties of HRL which lead to this stark difference." }, { "heading": "4 HYPOTHESES OF THE BENEFITS OF HIERARCHY", "text": "We begin by listing out the claimed benefits of hierarchical learning. These hypotheses can be organized into several overlapping categories. The first set of hypotheses (H1 and H2 below) rely on the fact that HRL uses temporally extended actions; i.e., the high-level policy operates at a lower temporal frequency than the atomic actions of the environment. The second set (H3 and H4 below) rely on the fact that HRL uses semantically meaningful actions – high-level actions often correspond to more semantic behaviors than the natural low-level atomic actions exposed by the MDP. For example, in robotic navigation, the atomic actions may correspond to torques applied at the robot’s joints, while the high-level actions in goal-conditioned HRL correspond to locations to which the robot might navigate. In options, there are many paradigms which are explicitly designed to achieve better exploration (McGovern & Barto, 2001; Kulkarni et al., 2016; Machado et al., 2017a;b). In the more undirected form of options that we use, it is argued that semantic behaviors naturally arise from unsupervised specialization of behaviors (Frans et al., 2018). The four hypotheses may also be categorized as hierarchical training (H1 and H3) and hierarchical exploration (H2 and H4).\n(H1) Temporally extended training. High-level actions correspond to multiple environment steps. To the high-level agent, episodes are effectively shorter. Thus, rewards are propagated faster and learning should improve.\n(H2) Temporally extended exploration. Since high-level actions correspond to multiple environment steps, exploration in the high-level is mapped to environment exploration which is temporally correlated across steps. This way, an HRL agent explores the environment more efficiently. As a motivating example, the distribution associated with a random (Gaussian) walk is wider when the random noise is temporally correlated.\n(H3) Semantic training. High-level actor and critic networks are trained with respect to semantically meaningful actions. 
These semantic actions are more correlated with future values, and thus easier to learn, compared to training with respect to the atomic actions of the environment. For example, in a robot navigation task it is easier to learn future values with respect to deltas in x-y coordinates rather than robot joint torques.

(H4) Semantic exploration. Exploration strategies (in the simplest case, random action noise) are applied to semantically meaningful actions, and are thus more meaningful than the same strategies would be if applied to the atomic actions of the environment. For example, in a robot navigation task it intuitively makes more sense to explore at the level of x-y coordinates rather than robot joint torques.

Due to space constraints, see the Appendix for an additional hypothesis based on modularity." }, { "heading": "5 EXPERIMENTS", "text": "Our experiments are aimed at studying the hypotheses outlined in the previous section, analyzing which of the intuitive benefits of HRL are actually present in practice. We begin by evaluating the performance of HRL when varying the length of temporal abstraction used for training and exploration (Section 5.1, H1 and H2), finding that although this has some impact on results, it is not enough to account for the stark difference between HRL and non-hierarchical methods observed in Figure 1. We then look at the training hypotheses more closely (Section 5.2, H1 and H3). We find that, controlling for exploration, hierarchical training is only useful insofar as it utilizes multi-step rewards, and furthermore the use of multi-step rewards is possible with a non-hierarchical agent. Given this surprising finding, we focus on the exploration question itself (Section 5.3, H2 and H4). We propose two exploration strategies, inspired by HRL, which enable non-hierarchical agents to achieve performance competitive with HRL." }, { "heading": "5.1 EVALUATING THE BENEFITS OF TEMPORAL ABSTRACTION (H1 AND H2)", "text": "We begin by evaluating the merits of Hypotheses H1 and H2, both of which appeal to the temporally extended nature of high-level actions. In our considered hierarchies, temporal abstraction is a hyperparameter. Each high-level action operates for c environment time steps. Accordingly, the choice of c impacts two main components of learning:

• During training, the higher-level policy is trained with respect to temporally extended transitions of the form (st, gt, Rt:t+c−1, st+c) (see Section 3 for details).
• During experience collection, a high-level action is sampled and updated every c steps.

The first of these implementation details corresponds to H1 (temporally extended training) while the second corresponds to H2 (temporally extended exploration), and we can vary these two parameters independently to study the two hypotheses separately. Accordingly, we take HIRO, the best performing HRL method from Figure 1, and implement it so that these two instances of temporal abstraction are decoupled into separate choices ctrain for training horizon and cexpl for experience collection horizon. We evaluate performance across different choices of these two hyperparameters.

The results are presented in Figure 2, showing performance for different values of ctrain (top) and cexpl (bottom); recall that our baseline HRL method uses ctrain = cexpl = 10. The strongest effect of ctrain is observed in AntMaze and AntPush, where the difference between ctrain = 1 and ctrain > 1 is crucial to adequately solving these tasks. 
Otherwise, while there is some noticeable difference between specific choices of ctrain (as long as ctrain > 1), there is no clear pattern suggesting that a larger value of ctrain is better.

For cexpl, the effect seems slightly stronger. In AntMaze, there is no observed effect, while in AntPush, AntBlock, and AntBlockMaze there exists some correlation suggesting higher values of cexpl do yield better performance. Even so, cexpl = 1 is often able to make non-negligible progress towards adequately solving the tasks, as compared to a non-hierarchical shallow policy (Figure 1).

Overall, these results provide intriguing insights into the impact of temporally abstracted training and exploration. While temporally extended training appears to help on these tasks, it is enough to have ctrain > 1. Temporally extended exploration appears to have a stronger effect, although it alone does not adequately explain the difference between an HRL agent that can solve the task and a non-hierarchical one that cannot make any progress. Where then does the benefit come from? In the next sections, we will delve deeper into the impact of hierarchy on training and exploration." }, { "heading": "5.2 EVALUATING THE BENEFITS OF HIERARCHICAL TRAINING (H1 AND H3)", "text": "The previous section suggested that temporally extended training (H1) has at most a modest impact on the performance of HRL. In this section, we take a closer look at the benefits of hierarchy on training and study Hypothesis H3, which suggests that high-level actions used by HRL are easier for learning as compared to the atomic actions of the MDP. In goal-conditioned hierarchies for example, H3 claims that it is easier for RL to learn policy and value functions based on delta x-y commands (goals), than it is to learn policy and value functions based on atomic joint-torque actions exposed by the environment. In this section we aim to isolate this supposed benefit from other confounding factors, such as potential exploration benefits. Therefore, we devise an experiment to disentangle exploration from action representation, by training a standard non-hierarchical agent (a shadow agent) on experience collected by a hierarchical agent. If the benefits of HRL stem primarily from exploration, we would expect the shadow agent to do well; if representation of high-level actions matters for training, we would expect HRL to do better.

Accordingly, we augment our HRL implementation (specifically, HIRO) with an additional parallel shadow agent, represented by a standard single-level policy and value function. Each agent – the HRL agent and the non-hierarchical shadow agent – has its own replay buffer and collects its own experience from the environment. During training, we train the HRL agent as usual, while the shadow agent is trained on batches of experience gathered from both replay buffers (70% from the shadow agent’s experience and 30% from the HRL agent’s experience, chosen based on appropriate tuning). This way, any need for exploration is fulfilled by the experience gathered by the HRL agent. Will the non-hierarchical agent’s policy still be able to learn? Or does training with a higher-level that uses semantically meaningful high-level actions make learning easier?

We present the results of our experiments in Figure 3. While the potential impacts of Hypotheses H2 and H4 (exploration) are neutralized by our setup, the impact of Hypothesis H1 (which Section 5.1 showed has a modest impact) still remains. 
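(As an aside, a minimal sketch of the shadow agent's mixed batch sampling is given below; the buffer interface and helper names are our own stand-ins rather than the paper's implementation, and the 70/30 split follows the tuning described above.)

# Shadow-agent batch sampling: 70% own experience, 30% HRL experience.
import random

def sample_mixed_batch(shadow_buffer, hrl_buffer, batch_size, shadow_frac=0.7):
    n_shadow = int(round(shadow_frac * batch_size))
    batch = (random.choices(shadow_buffer, k=n_shadow)               # shadow transitions
             + random.choices(hrl_buffer, k=batch_size - n_shadow))  # HRL transitions
    random.shuffle(batch)
    return batch

shadow_buffer = [("shadow", i) for i in range(100)]  # placeholder transitions
hrl_buffer = [("hrl", i) for i in range(100)]
print(len(sample_mixed_batch(shadow_buffer, hrl_buffer, 10)))  # 10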
As an attempt to control for this factor, we also consider a setup where the non-hierarchical shadow agent receives multi-step rewards (see Section 3 for an overview of multi-step rewards). Different temporal extents for the multi-step rewards are indicated by crew in the figure legend.

The results in Figure 3 show that learning from atomic actions, without higher-level action representations, is feasible, and can achieve similar performance as HRL. On AntMaze, we observe a slight drop in performance, but otherwise performance is competitive with HRL. The results across different multi-step reward horizons crew also provide further insight into the conclusions of Section 5.1. As suggested by the results of Section 5.1, temporally abstracted training does affect performance, especially for AntMaze and AntPush. Still, while temporally abstracted training is important, these results show that the same benefit can be achieved by simply using multi-step rewards (which are much simpler to implement than using temporally extended actions). To confirm that multi-step rewards are not the only component necessary for success, see Figure 1, in which a non-hierarchical shallow agent with multi-step rewards is unable to make non-negligible progress on these tasks.

Overall, we conclude that the high-level action representations used by HRL methods in our domains are not a core factor for the success of these methods, outside of their potential benefits for exploration. The only observed benefit of high-level action representations in training is due to the use of multi-step rewards, and this can be easily incorporated into non-hierarchical agent training." }, { "heading": "5.3 EVALUATING THE BENEFITS OF HIERARCHICAL EXPLORATION", "text": "The findings of the previous section show that a non-hierarchical agent trained on ‘good’ experience (from a hierarchical agent) performs about as well as the hierarchical agent itself. If representing the policy and value function in terms of temporally extended, abstract actions is not crucial to achieving good performance, the next most-likely explanation is that the ‘good’ experience itself is the key. That is, good exploration is the key component to the success of HRL. This is the claim proposed by Hypotheses H2 (temporally extended exploration) and H4 (semantic exploration). In this section, we attempt to extend the experiments presented in Section 5.1 to better understand the impact of good exploration on the performance of a non-hierarchical agent. We will show that it is possible to enable non-hierarchical agents to achieve results competitive with HRL by using two exploration methods inspired by HRL: Explore & Exploit and Switching Ensemble.

Explore & Exploit is inspired by the hypothesis that goal-reaching is a good exploration strategy independent of hierarchy (Baranes & Oudeyer, 2010). Thus, we propose training two non-hierarchical agents – one trained to maximize environment rewards (similar to the higher-level policy in HRL), and the other trained to reach goals (similar to the lower-level policy in goal-conditioned HRL). Unlike in HRL, each policy operates on the atomic actions of the environment, and the goal for the explore agent is sampled randomly according to an Ornstein-Uhlenbeck process² (standard deviation 5 and damping 0.8) as opposed to a learned policy. During experience collection, we randomly switch between the explore and exploit agents every cswitch timesteps. 
Specifically, every cswitch steps we randomly sample one of the two agents (with probability 0.2, 0.8 for the explore and exploit agents, respectively), and this chosen agent is used for sampling the subsequent cswitch atomic actions. Both agents share the same replay buffer for training. In this way, we preserve the benefits of goal-directed exploration – temporally extended and based on goal-reaching in a semantically meaningful space – without explicit hierarchies of policies.\nOur other proposed exploration method, Switching Ensemble, is inspired by the options framework, in which multiple lower-level policies interact to solve a task based on their shared experience. We propose a simple variant of this approach that removes the higher-level policy. We train multiple (specifically, five) non-hierarchical agents to maximize environment rewards. During experience collection, we choose one of these agents uniformly at random every cswitch timesteps. This way, we again maintain the spirit of exploration used in HRL – temporally extended and based on multiple interacting agents – while avoiding the use of explicit hierarchies of policies. This approach is related to the use of randomized value functions for exploration (Osband et al., 2014; 2016; Plappert et al., 2017; Fortunato et al., 2017) and may have a Bayesian interpretation (Gal & Ghahramani, 2016), although our proposal is unique for having a mechanism (cswitch) to control the temporally extended nature of the exploration. For both of these methods, we utilize multi-step environment rewards (crew = 3), which we found to work well in Section 5.2 (Figure 3).\nOur findings are presented in Figure 4. We find that the proposed alternatives are able to achieve performance similar to HRL, with the only exceptions being Explore & Exploit on AntBlockMaze and Switching Ensemble on AntPush. Overall, these methods are able to bridge the gap in empirical performance between HRL and non-hierarchical methods from Figure 1, confirming the importance of good exploration on these tasks. Notably, these results show the benefit of temporally extended exploration even for non-hierarchical agents – using cswitch > 1 is often significantly better than using cswitch = 1 (switching the agent every step). Furthermore, the good performance of Explore & Exploit suggests that semantic exploration (goal-reaching) is beneficial, and likely plays an important role in the success of goal-conditioned HRL methods. The success of Switching Ensemble further shows that an explicit higher-level policy used to direct multiple agents is not necessary in these environments.\nOverall, these results suggest that the success of HRL on these tasks is largely due to better exploration. That is, goal-conditioned and options-based hierarchies are better at exploring these environments as opposed to discovering high-level representations which make policy and value function training easier. Furthermore, these benefits can be achieved without explicit hierarchies of policies. Indeed, the results of Figure 4 show that non-hierarchical agents can achieve similar performance as state-of-the-art HRL, as long as they (1) use multi-step rewards in training and (2) use temporallyextended exploration (based on either goal-reaching or randomized value functions).\nBeyond the core analysis in our experiments, we also studied the effects of modularity – using separate networks to represent higher and lower-level policies. Due to space constraints, these results are presented in the Appendix. 
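For concreteness, the shared control loop of the two exploration methods can be sketched as follows; the environment and policy interfaces are hypothetical stand-ins of our own, not the actual implementation, and training, replay buffers, and the OU goal process are omitted.

# Temporally extended agent switching used by both exploration methods.
import random

def collect_episode(env, exploit, explore, ensemble, c_switch=10, mode="e&e"):
    state, done, t = env.reset(), False, 0
    current = exploit
    while not done:
        if t % c_switch == 0:  # re-draw the acting agent every c_switch steps
            if mode == "e&e":
                # Explore & Exploit: explore w.p. 0.2, exploit w.p. 0.8.
                current = explore if random.random() < 0.2 else exploit
            else:
                # Switching Ensemble: uniform choice among the five agents.
                current = random.choice(ensemble)
        state, reward, done = env.step(current.act(state))  # assumed env API
        t += 1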
The experiments in the Appendix confirm that the use of separate networks is beneficial for HRL. We further confirm that using separate networks for the Explore & Exploit and Switching Ensemble methods is crucial for their effectiveness." }, { "heading": "6 DISCUSSION AND CONCLUSION", "text": "Looking back at the initial set of hypotheses from Section 4, we can draw a number of conclusions based on our empirical analysis. In terms of the benefits of training, it is clear that training with respect to semantically meaningful abstract actions (H3) has a negligible effect on the success of HRL (as seen from our shadow experiments; Figure 3). Moreover, temporally extended training (H1) is only important insofar as it enables the use of multi-step rewards, as opposed to training with respect to temporally extended actions (Figure 3). The main, and arguably most surprising, benefit of hierarchy is due to exploration. This is evidenced by the fact that temporally extended goal-reaching and agent-switching can enable non-hierarchical agents to solve tasks that otherwise can only be solved, in our experiments, by hierarchical agents (Figure 4). These results suggest that the empirical effectiveness of hierarchical agents simply reflects the improved exploration that these agents can attain.

²OU noise for temporally correlated exploration was first used by Lillicrap et al. (2015), where the noise is added to the actions of a deterministic policy. In contrast, in our application, OU noise is used to determine a temporally correlated goal; i.e., the OU noise is used as an input as opposed to being added to the output of a policy.

These conclusions suggest several future directions. First, our results show that current state-of-the-art HRL methods only achieve a subset of their claimed benefits. More research needs to be done to fully realize all of the benefits, especially with respect to semantic and temporally extended training. Second, our results suggest that hierarchy can be used as an inspiration for better exploration methods, and we encourage future work to investigate more variants of the non-hierarchical exploration strategies we proposed.

Still, our empirical analysis has limitations. Our results and conclusions are restricted to a limited set of tasks and hierarchical designs. The use of other hierarchical designs may lead to different conclusions. Additionally, conclusions may be different for different task settings. For example, the use of hierarchy in multi-task settings may be beneficial for better transfer, a benefit that we did not evaluate. In addition, tasks with more complex environments and/or sparser rewards may benefit from other mechanisms for encouraging exploration (e.g., count-based exploration), which would be a complementary investigation to this study. An examination of different hierarchical structures and more varied settings is an important direction for future research." }, { "heading": "A TRAINING DETAILS", "text": "We provide a more detailed visualization and description of HRL (Figure 6)." }, { "heading": "B EVALUATING THE BENEFITS OF MODULARITY", "text": "We evaluate the merits of using modularity in HRL systems. We have already shown in the main text that a non-HRL agent can achieve performance similar to HRL. However, all of these non-HRL agents utilize multiple policies, similar to how HRL agents have separate lower-level and higher-level policies.

Thus, we evaluate how important this modularity is. 
We evaluate HIRO as well as the successful non-hierarchical methods from Section 5.3 with and without separate networks. Specifically, for HIRO we combine the separate networks for lower and higher-level policies into a single network with multiple heads. For Explore & Exploit and Five Exploit (the Switching Ensemble of five exploit agents) we combine the separate networks for each policy into a single network with multiple heads. The results are presented in Figure 7. We see that combined networks consistently lead to worse performance than structurally separate networks. HIRO and Explore & Exploit are especially sensitive to this change, suggesting that Hypothesis H5 is true for settings using goal-conditioned hierarchy or exploration. Overall, the use of separate networks for goal-reaching and task solving is beneficial to the performance of these methods in these settings." }, { "heading": "C EXPERIMENT DETAILS", "text": "Our implementations are based on the open-source implementation of HIRO (Nachum et al., 2018a), using default hyperparameters. HIRO uses TD3 for policy training (Fujimoto et al., 2018), and so we train all non-hierarchical agents using TD3, with the same network and training hyperparameters as used by HIRO, unless otherwise stated.

The choice of c in goal-conditioned HRL can also impact low-level training, since the frequency of new goals in recorded experience can affect the quality of the learned low-level behavior. To neutralize this factor in our ablations, we modify transitions (st, gt, at, rt, st+1, gt+1) used for low-level training by replacing the next goal gt+1 with the current goal gt; in this way the lower-level policy is trained as if the high-level goal is never changed. This implementation modification has a negligible effect on HRL’s performance with otherwise default settings.

To implement HIRO with goal relabelling, we augment the HIRO implementation with hindsight experience replay used for the lower-level policy. To implement Option-Critic (Bacon et al., 2017) in this framework, we create m = 5 separate lower-level policies trained to maximize reward (using n-step returns, where n = 3). We replace the higher-level continuous-action policy with a discrete double DQN-based agent, with ε-greedy exploration (ε = 0.5).

For our exploration alternatives (Explore & Exploit and Switching Ensemble), we utilize multi-step environment rewards with crew = 3, which we found to work well in Section 5.2 (see Figure 3). We also found it beneficial to train at a lower frequency: we collect 2 environment steps per training (gradient descent) step. To keep the comparisons fair, we train these variants for 5M training steps (corresponding to 10M environment steps, equal to that used by HRL)." }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "" } ]
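As a supplement to the experiment details above, here is one plausible implementation of the Ornstein-Uhlenbeck goal sampler described as having "standard deviation 5 and damping 0.8"; the exact update convention is not spelled out in the text, so the particular damped random walk below is our own assumption.

# OU-style goal sampler for the explore agent; illustrative sketch only.
import numpy as np

class OUGoalSampler:
    def __init__(self, dim=2, sigma=5.0, damping=0.8, seed=0):
        self.sigma, self.damping = sigma, damping
        self.rng = np.random.default_rng(seed)
        self.g = np.zeros(dim)

    def sample(self):
        # Damp the previous goal toward zero, then add Gaussian noise;
        # consecutive goals are therefore temporally correlated.
        self.g = self.damping * self.g + self.sigma * self.rng.standard_normal(self.g.shape)
        return self.g.copy()

sampler = OUGoalSampler()
goals = [sampler.sample() for _ in range(5)]  # correlated relative x-y goals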
2019
WHY DOES HIERARCHY (SOMETIMES) WORK SO WELL
SP:385a392e6d055abd65a737f3c5be58105778ac11
[ "Stability is one of the important aspects of machine learning. This paper views Jacobian regularization as a scheme to improve stability, and studies the behavior of Jacobian regularization under random input perturbations, adversarial input perturbations, train/test distribution shift, and simply as a regularization tool in the classical setting without any distribution shift or perturbations. There are already several related works that propose to use Jacobian regularization, but previous works did not have an efficient algorithm and also did not have a theoretical convergence guarantee. This paper offers a solution that efficiently approximates the Frobenius norm of the Jacobian and also shows the optimal convergence rate for the proposed method. Various experiments illustrate the behavior of Jacobian regularization and show that it is robust.", "The main contribution of this paper is a proposed estimator of the Jacobian regularization term for neural networks that reduces the computational cost by orders of magnitude, and the estimator is mathematically proved to be unbiased. In detail, the computational cost of applying the Jacobian regularizer and the unbiasedness of the proposed estimator are established mathematically. The authors then experimentally demonstrate that the proposed regularization term retains all the practical benefits of the exact method but at a much lower computational cost. Quantitative experiments are provided to illustrate that the proposed Jacobian regularizer does not adversely affect the model, can be used simultaneously with other regularizers, and effectively improves the model's robustness against random and adversarial input perturbations." ]
Design of reliable systems must guarantee stability against input perturbations. In machine learning, such guarantee entails preventing overfitting and ensuring robustness of models against corruption of input data. In order to maximize stability, we analyze and develop a computationally efficient implementation of Jacobian regularization that increases classification margins of neural networks. The stabilizing effect of the Jacobian regularizer leads to significant improvements in robustness, as measured against both random and adversarial input perturbations, without severely degrading generalization properties on clean data.
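To make the efficiency claim concrete, the sketch below (our own illustration, not the authors' reference implementation) shows one way to build an unbiased random-projection estimator of the squared Frobenius norm of the input-output Jacobian in PyTorch. It relies on the identity ||J||_F^2 = C * E_v[||v^T J||^2] for unit vectors v drawn uniformly from the sphere, where C is the number of outputs; the function and parameter names (model, n_proj) are our own.

# Random-projection estimator of ||J||_F^2; illustrative sketch only.
import torch

def jacobian_norm_sq(model, x, n_proj=1):
    # For v uniform on the unit sphere in R^C, E[v v^T] = I / C, so
    # C * E_v ||d(v . z)/dx||^2 equals ||J||_F^2 (unbiased for any n_proj).
    x = x.clone().requires_grad_(True)
    z = model(x)                                  # (batch, C) outputs
    C = z.shape[1]
    est = 0.0
    for _ in range(n_proj):
        v = torch.randn_like(z)
        v = v / v.norm(dim=1, keepdim=True)       # random unit projections
        (grad_x,) = torch.autograd.grad((z * v).sum(), x, create_graph=True)
        est = est + C * grad_x.pow(2).sum() / n_proj
    return est / x.shape[0]                       # mean over the mini-batch

model = torch.nn.Sequential(torch.nn.Linear(10, 5))
x = torch.randn(8, 10)
reg = 0.01 * jacobian_norm_sq(model, x)           # add this to the task loss
reg.backward()

A single projection per mini-batch already yields an unbiased estimate, which is what makes this style of regularizer cheap relative to materializing the full Jacobian.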
[]
[ { "authors": [ "Othmar H Amman", "Theodore von Kármán", "Glenn B Woodruff" ], "title": "The failure of the Tacoma Narrows bridge", "venue": "Report to the Federal Works Agency,", "year": 1941 }, { "authors": [ "Richard P Feynman", "Ralph Leighton" ], "title": "What do you care what other people think?\": further adventures of a curious character", "venue": "WW Norton & Company,", "year": 2001 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "IEEE European Symposium on Security and Privacy (EuroS&P),", "year": 2016 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Justin Gilmer", "Ryan P Adams", "Ian Goodfellow", "David Andersen", "George E Dahl" ], "title": "Motivating the rules of the game for adversarial example research", "venue": "arXiv preprint arXiv:1807.06732,", "year": 2018 }, { "authors": [ "Geoffrey E Hinton" ], "title": "Learning translation invariant recognition in a massively parallel networks", "venue": "In International Conference on Parallel Architectures and Languages Europe,", "year": 1987 }, { "authors": [ "Anders Krogh", "John A Hertz" ], "title": "A simple weight decay can improve generalization", "venue": "In Advances in neural information processing systems,", "year": 1992 }, { "authors": [ "Guodong Zhang", "Chaoqi Wang", "Bowen Xu", "Roger Grosse" ], "title": "Three mechanisms of weight decay regularization", "venue": "arXiv preprint arXiv:1810.12281,", "year": 2018 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Jure Sokolić", "Raja Giryes", "Guillermo Sapiro", "Miguel RD Rodrigues" ], "title": "Robust large margin deep neural networks", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: a large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 
2009 }, { "authors": [ "Dániel Varga", "Adrián Csiszárik", "Zsolt Zombori" ], "title": "Gradient regularization improves accuracy of discriminative models", "venue": "arXiv preprint arXiv:1712.09936,", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": "In Neural Information Processing Symposium,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Roman Novak", "Yasaman Bahri", "Daniel A Abolafia", "Jeffrey Pennington", "Jascha SohlDickstein" ], "title": "Sensitivity and generalization in neural networks: an empirical study", "venue": "arXiv preprint arXiv:1802.08760,", "year": 2018 }, { "authors": [ "Jonathan J Hull" ], "title": "A database for handwritten text recognition research", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 1994 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Defensive distillation is not robust to adversarial examples", "venue": "arXiv preprint arXiv:1607.04311,", "year": 2016 }, { "authors": [ "Harris Drucker", "Yann LeCun" ], "title": "Double backpropagation increasing generalization performance", "venue": "In IJCNN-91-Seattle International Joint Conference on Neural Networks,", "year": 1991 }, { "authors": [ "Harris Drucker", "Yann LeCun" ], "title": "Improving generalization performance using double backpropagation", "venue": "IEEE Transactions on Neural Networks,", "year": 1992 }, { "authors": [ "Chunchuan Lyu", "Kaizhu Huang", "Hai-Ning Liang" ], "title": "A unified gradient regularization family for adversarial examples", "venue": "In 2015 IEEE International Conference on Data Mining,", "year": 2015 }, { "authors": [ "Alexander G Ororbia II", "C Lee Giles", "Daniel Kifer" ], "title": "Unifying adversarial training algorithms with flexible deep data gradient regularization", "venue": "arXiv preprint arXiv:1601.07213,", "year": 2016 }, { "authors": [ "Andrew Slavin Ross", "Finale Doshi-Velez" ], "title": "Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Patrice Simard", "Bernard Victorri", "Yann LeCun", "John Denker" ], "title": "Tangent prop–a formalism for specifying selected invariances in an adaptive network", "venue": "In Advances in neural information processing systems,", "year": 1992 }, { "authors": [ 
"Tom M Mitchell", "Sebastian B Thrun" ], "title": "Explanation-based neural network learning for robot control", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Filipe Aires", "Michel Schmitt", "Alain Chedin", "Noëlle Scott" ], "title": "The “weight smoothing\" regularization of MLP for Jacobian stabilization", "venue": "IEEE Transactions on Neural Networks,", "year": 1999 }, { "authors": [ "Salah Rifai", "Pascal Vincent", "Xavier Muller", "Xavier Glorot", "Yoshua Bengio" ], "title": "Contractive autoencoders: explicit invariance during feature extraction", "venue": "In Proceedings of the 28th International Conference on International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of Wasserstein GANs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yuichi Yoshida", "Takeru Miyato" ], "title": "Spectral norm regularization for improving the generalizability of deep learning", "venue": "arXiv preprint arXiv:1705.10941,", "year": 2017 }, { "authors": [ "Wojciech M Czarnecki", "Simon Osindero", "Max Jaderberg", "Grzegorz Swirszcz", "Razvan Pascanu" ], "title": "Sobolev training for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Daniel Jakubovitz", "Raja Giryes" ], "title": "Improving DNN robustness to adversarial attacks using Jacobian regularization", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Shixiang Gu", "Luca Rigazio" ], "title": "Towards deep neural network architectures robust to adversarial examples", "venue": "arXiv preprint arXiv:1412.5068,", "year": 2014 }, { "authors": [ "Benoît Collins", "Piotr" ], "title": "Śniady. Integration with respect to the Haar measure on unitary, orthogonal and symplectic group", "venue": "Communications in Mathematical Physics,", "year": 2006 }, { "authors": [ "Benoît Collins", "Sho Matsumoto" ], "title": "On some properties of orthogonal Weingarten functions", "venue": "Journal of Mathematical Physics,", "year": 2009 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li Fei-Fei" ], "title": "ImageNet Large Scale Visual Recognition Challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Stability analysis lies at the heart of many scientific and engineering disciplines. In an unstable system, infinitesimal perturbations amplify and have substantial impacts on the performance of the system. It is especially critical to perform a thorough stability analysis on complex engineered systems deployed in practice, or else what may seem like innocuous perturbations can lead to catastrophic consequences such as the Tacoma Narrows Bridge collapse (Amman et al., 1941) and the Space Shuttle Challenger disaster (Feynman and Leighton, 2001). As a rule of thumb, well-engineered systems should be robust against any input shifts – expected or unexpected.\nMost models in machine learning are complex nonlinear systems and thus no exception to this rule. For instance, a reliable model must withstand shifts from training data to unseen test data, bridging the so-called generalization gap. This problem is severe especially when training data are strongly biased with respect to test data, as in domain-adaptation tasks, or when only sparse sampling of a true underlying distribution is available, as in few-shot learning. Any instability in the system can further be exploited by adversaries to render trained models utterly useless (Szegedy et al., 2013; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016a; Kurakin et al., 2016; Madry et al., 2017; Carlini and Wagner, 2017; Gilmer et al., 2018). It is thus of utmost importance to ensure that models be stable against perturbations in the input space.\nVarious regularization schemes have been proposed to improve the stability of models. For linear classifiers and support vector machines (Cortes and Vapnik, 1995), this goal is attained via an L2 regularization which maximizes classification margins and reduces overfitting to the training data. This regularization technique has been widely used for neural networks as well and shown to promote generalization (Hinton, 1987; Krogh and Hertz, 1992; Zhang et al., 2018). However, it remains unclear whether or not L2 regularization increases classification margins and stability of a network, especially for deep architectures with intertwining nonlinearity.\nIn this paper, we suggest ensuring robustness of nonlinear models via a Jacobian regularization scheme. We illustrate the intuition behind our regularization approach by visualizing the classification margins of a simple MNIST digit classifier in Figure 1 (see Appendix A for more). Decision cells of a neural network, trained without regularization, are very rugged and can be unpredictably unstable (Figure 1a). On average, L2 regularization smooths out these rugged boundaries but does not necessarily increase the size of decision cells, i.e., does not increase classification margins (Figure 1b). In contrast, Jacobian regularization pushes decision boundaries farther away from each training data point, enlarging decision cells and reducing instability (Figure 1c).\nThe goal of the paper is to promote Jacobian regularization as a generic scheme for increasing robustness while also being agnostic to the architecture, domain, or task to which it is applied. In\nsupport of this, after presenting the Jacobian regularizer, we evaluate its effect both in isolation as well as in combination with multiple existing approaches that are intended to promote robustness and generalization. Our intention is to showcase the ease of use and complimentary nature of our proposed regularization. 
Domain experts in each field should be able to quickly incorporate our regularizer into their learning pipeline as a simple way of improving the performance of their state-of-the-art system.

The rest of the paper is structured as follows. In Section 2 we motivate the usage of Jacobian regularization and develop a computationally efficient algorithm for its implementation. Next, the effectiveness of this regularizer is empirically studied in Section 3. As regularizers constrain the learning problem, we first verify that the introduction of our regularizer does not adversely affect learning in the case when input data remain unperturbed. Robustness against both random and adversarial perturbations is then evaluated and shown to receive significant improvements from the Jacobian regularizer. We contrast our work with the literature in Section 4 and conclude in Section 5." }, { "heading": "2 METHOD", "text": "Here we introduce a scheme for minimizing the norm of an input-output Jacobian matrix as a technique for regularizing learning with stochastic gradient descent (SGD). We begin by formally defining the input-output Jacobian and then explain an efficient algorithm for computing the Jacobian regularizer using standard machine learning frameworks." }, { "heading": "2.1 STABILITY ANALYSIS AND INPUT-OUTPUT JACOBIAN", "text": "Let us consider the set of classification functions, f, which take a vectorized sensory signal, x \in \mathbb{R}^I, as input and output a score vector, z = f(x) \in \mathbb{R}^C, where each element, z_c, is associated with the likelihood that the input is from category c.1 In this work, we focus on learning this classification function as a neural network with model parameters θ, though our findings should generalize to any parameterized function. Our goal is to learn the model parameters that minimize the classification objective on the available training data while also being stable against perturbations in the input space so as to increase classification margins.

1Throughout the paper, the vector z denotes the logit before applying a softmax layer. The probabilistic output of the softmax, p_c, relates to z_c via p_c \equiv \frac{e^{z_c/T}}{\sum_{c'} e^{z_{c'}/T}} with temperature T, typically set to unity.

The input-output Jacobian matrix naturally emerges in the stability analysis of the model predictions against input perturbations. Let us consider a small perturbation vector, \epsilon \in \mathbb{R}^I, of the same dimension as the input. For a perturbed input \tilde{x} = x + \epsilon, the corresponding output values shift to

\tilde{z}_c = f_c(x+\epsilon) = f_c(x) + \sum_{i=1}^{I} \epsilon_i \, \frac{\partial f_c}{\partial x_i}(x) + O(\epsilon^2) = z_c + \sum_{i=1}^{I} J_{c;i}(x) \, \epsilon_i + O(\epsilon^2) ,   (1)

where in the second equality the function was Taylor-expanded with respect to the input perturbation and in the third equality the input-output Jacobian matrix,

J_{c;i}(x) \equiv \frac{\partial f_c}{\partial x_i}(x) ,   (2)

was introduced. As the function f is typically almost everywhere analytic, for sufficiently small perturbations the higher-order terms can be neglected and the stability of the prediction is governed by the input-output Jacobian." }, { "heading": "2.2 ROBUSTNESS THROUGH INPUT-OUTPUT JACOBIAN MINIMIZATION", "text": "From Equation (1), it is straightforward to see that the larger the components of the Jacobian are, the more unstable the model prediction is with respect to input perturbations. A natural way to reduce this instability then is to decrease the magnitude of each component of the Jacobian matrix, which can be realized by minimizing the square of the Frobenius norm of the input-output Jacobian,2

||J(x)||_F^2 \equiv \sum_{i,c} \left[ J_{c;i}(x) \right]^2 .   (3)

For linear models, this reduces exactly to L2 regularization that increases classification margins of these models. For nonlinear models, however, Jacobian regularization does not equate to L2 regularization, and we expect these schemes to affect models differently. In particular, predictions made by models trained with the Jacobian regularization do not vary much as inputs get perturbed and hence decision cells enlarge on average. This increase in stability granted by the Jacobian regularization is visualized in Figure 1, which depicts a cross section of the decision cells for the MNIST digit classification problem using a nonlinear neural network (LeCun et al., 1998).

The Jacobian regularizer in Equation (3) can be combined with any loss objective used for training parameterized models. Concretely, consider a supervised learning problem modeled by a neural network and optimized with SGD. At each iteration, a mini-batch B consists of a set of labeled examples, {x_α, y_α}_{α∈B}, and a supervised loss function, L_super, is optimized possibly together with some other regularizer R(θ) – such as the L2 regularizer \frac{\lambda_{\rm WD}}{2} \theta^2 – over the function parameter space, by minimizing the following bare loss function

\mathcal{L}_{\rm bare}\left( \{x_\alpha, y_\alpha\}_{\alpha \in B}; \theta \right) = \frac{1}{|B|} \sum_{\alpha \in B} \mathcal{L}_{\rm super}\left[ f(x_\alpha); y_\alpha \right] + \mathcal{R}(\theta) .   (4)

To integrate our Jacobian regularizer into training, one instead optimizes the following joint loss

\mathcal{L}^{B}_{\rm joint}(\theta) = \mathcal{L}_{\rm bare}\left( \{x_\alpha, y_\alpha\}_{\alpha \in B}; \theta \right) + \frac{\lambda_{\rm JR}}{2} \left[ \frac{1}{|B|} \sum_{\alpha \in B} ||J(x_\alpha)||_F^2 \right] ,   (5)

where λJR is a hyperparameter that determines the relative importance of the Jacobian regularizer. By minimizing this joint loss with sufficient training data and a properly chosen λJR, we expect models to learn both correctly and robustly.

2Minimizing the Frobenius norm will also reduce the L1-norm, since these norms satisfy the inequalities ||J(x)||_F \le \sum_{i,c} |J_{c;i}(x)| \le \sqrt{IC} \, ||J(x)||_F. We prefer to minimize the Frobenius norm over the L1-norm because the ability to express the former as a trace leads to an efficient algorithm [see Equations (6) through (8)].
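To make the joint loss of Equation (5) concrete, here is a minimal PyTorch sketch of the exact regularizer (our own illustration, not the authors' released code; the function and variable names are ours). It computes ||J(x)||_F^2 per example by backpropagating once per output class, which is exact but incurs the C-fold overhead discussed in the next subsection:

    import torch

    def exact_jacobian_frobenius_sq(model, x):
        # x: (batch, ...) input tensor; model(x): (batch, C) logits.
        x = x.clone().requires_grad_(True)
        z = model(x)
        C = z.shape[1]
        sq_norm = 0.0
        for c in range(C):  # one backward pass per basis vector e_c
            e = torch.zeros_like(z)
            e[:, c] = 1.0
            grad, = torch.autograd.grad(z, x, grad_outputs=e, create_graph=True)
            # grad holds row c of the Jacobian for every example in the batch.
            sq_norm = sq_norm + grad.pow(2).flatten(1).sum(dim=1)
        return sq_norm  # (batch,) vector of ||J(x_alpha)||_F^2

Adding 0.5 * lambda_jr * sq_norm.mean() to the supervised loss then realizes Equation (5); create_graph=True keeps the quantity differentiable with respect to the model parameters θ.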
" }, { "heading": "2.3 EFFICIENT APPROXIMATE ALGORITHM", "text": "In the previous section we have argued for minimizing the Frobenius norm of the input-output Jacobian to improve robustness during learning. The main question that follows is how to efficiently compute and implement this regularizer in such a way that its optimization can seamlessly be incorporated into any existing learning paradigm. Recently, Sokolić et al. (2017) also explored the idea of regularizing the Jacobian matrix during learning, but only provided an inefficient algorithm requiring an increase in computational cost that scales linearly with the number of output classes, C, compared to the bare optimization problem (see explanation below). In practice, such an overhead will be prohibitively expensive for many large-scale learning problems, e.g. ImageNet classification has C = 1000 target classes (Deng et al., 2009). (Our scheme, in contrast, can be used for ImageNet: see Appendix H.)

Here, we offer a different solution that makes use of random projections to efficiently approximate the Frobenius norm of the Jacobian.3 This only introduces a constant time overhead and can be made very small in practice. When considering such an approximate algorithm, one naively must trade off efficiency against accuracy for computing the Jacobian, which ultimately trades computation time for robustness. Prior work by Varga et al. (2017) briefly considers an approach based on random projection, but without providing any analysis on the quality of the Jacobian approximation. Here, we describe our algorithm, analyze theoretical convergence guarantees, and verify empirically that there is only a negligible difference in model solution quality between training with the exact computation of the Jacobian as compared to training with the approximate algorithm, even when using a single random projection (see Figure 2).

Given that optimization is commonly gradient based, it is essential to efficiently compute gradients of the joint loss in Equation (5) and in particular of the squared Frobenius norm of the Jacobian. First, we note that automatic differentiation systems implement a function that computes the derivative of a vector such as z with respect to any variables on which it depends, if the vector is first contracted with another fixed vector. To take advantage of this functionality, we rewrite the squared Frobenius norm as

||J(x)||_F^2 = {\rm Tr}\left( J J^{\rm T} \right) = \sum_{\{e\}} e J J^{\rm T} e^{\rm T} = \sum_{\{e\}} \left[ \frac{\partial (e \cdot z)}{\partial x} \right]^2 ,   (6)

where a constant orthonormal basis, {e}, of the C-dimensional output space was inserted in the second equality and the last equality follows from definition (2) and moving the constant vector inside the derivative. For each basis vector e, the quantity in the last parenthesis can then be efficiently computed by differentiating the product, e · z, with respect to input parameters, x. Recycling that computational graph, the derivative of the squared Frobenius norm with respect to the model parameters, θ, can be computed through backpropagation with any use of automatic differentiation. Sokolić et al. (2017) essentially considers this exact computation, which requires backpropagating gradients through the model C times to iterate over the C orthonormal basis vectors {e}. Ultimately, this incurs computational overhead that scales linearly with the output dimension C.

Instead, we further rewrite Equation (6) in terms of the expectation of an unbiased estimator

||J(x)||_F^2 = C \, \mathbb{E}_{\hat{v} \sim S^{C-1}} \left[ ||\hat{v} \cdot J||^2 \right] ,   (7)

where the random vector v̂ is drawn from the (C − 1)-dimensional unit sphere S^{C−1}. Using this relationship, we can use samples of n_proj random vectors v̂_µ to estimate the square of the norm as

||J(x)||_F^2 \approx \frac{1}{n_{\rm proj}} \sum_{\mu=1}^{n_{\rm proj}} \left[ \frac{\partial (\hat{v}_\mu \cdot z)}{\partial x} \right]^2 ,   (8)

which converges to the true value as O(n_{\rm proj}^{-1/2}). The derivation of Equation (7) and the calculation of its convergence make use of random-matrix techniques and are provided in Appendix B.

Finally, we expect that the fluctuations of our estimator can be suppressed by cancellations within a mini-batch. With nearly independent and identically distributed samples in a mini-batch of size |B| ≫ 1, we expect the error in our estimate to be of order (n_proj |B|)^{-1/2}. In fact, as shown in Figure 2, with a mini-batch size of |B| = 100, a single projection yields model performance that is nearly identical to the exact method, with computational cost being reduced by orders of magnitude.

The complete algorithm is presented in Algorithm 1. With a straightforward implementation in PyTorch (Paszke et al., 2017) and n_proj = 1, we observed the computational cost of the training with the Jacobian regularization to be only ≈ 1.3 times that of the standard SGD computation cost, while retaining all the practical benefits of the expensive exact method.4

3In Appendix C, we give an alternative method for computing gradients of the Jacobian regularizer by using an analytically derived formula.

4The costs are measured on a single NVIDIA GP100 for the LeNet' architecture on MNIST data. The computational efficiency depends on datasets and model architectures; the largest we have observed is a factor of ≈ 2 increase in computational time for ResNet-18 on CIFAR-10 (Appendix E), which is still of order one.

Algorithm 1 Efficient computation of the approximate gradient of the Jacobian regularizer.
Inputs: mini-batch of |B| examples x^α, model outputs z^α, and number of projections n_proj.
Outputs: square of the Frobenius norm of the Jacobian J_F and its gradient ∇_θ J_F.
  J_F = 0
  for i = 1 to n_proj do
    {v^α_c} ∼ N(0, I)   ▷ (|B|, C)-dim tensor with each element sampled from a standard normal.
    v̂^α = v^α / ||v^α||   ▷ Uniform sampling from the unit sphere for each α.
    z_flat = Flatten({z^α}); v_flat = Flatten({v̂^α})   ▷ Flatten for parallelism.
    J_v = ∂(z_flat · v_flat)/∂x^α
    J_F += C ||J_v||^2 / (n_proj |B|)
  end for
  ∇_θ J_F = ∂J_F/∂θ
  return J_F, ∇_θ J_F
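Since the paper's own PyTorch implementation is described but not reproduced here, the following is a minimal sketch of Algorithm 1 as we read it (the names are ours, and we assume z = model(x) are the logits of shape (|B|, C)):

    import torch

    def jacobian_reg(x, z, n_proj=1):
        # x: mini-batch of inputs with requires_grad=True; z: (B, C) logits computed from x.
        B, C = z.shape
        JF = 0.0
        for _ in range(n_proj):
            v = torch.randn(B, C, device=z.device)    # {v^alpha_c} ~ N(0, I)
            v_hat = v / v.norm(dim=1, keepdim=True)   # uniform on the unit sphere S^{C-1}
            # One backward pass gives v_hat . J for every example in the mini-batch.
            Jv, = torch.autograd.grad((z * v_hat).sum(), x, create_graph=True)
            JF = JF + C * Jv.pow(2).sum() / (n_proj * B)
        return JF  # differentiable estimate of (1/|B|) sum_alpha ||J(x_alpha)||_F^2

A training step would then read, e.g., x.requires_grad_(True); z = model(x); loss = criterion(z, y) + 0.5 * lambda_jr * jacobian_reg(x, z); loss.backward(), matching the joint loss of Equation (5).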
" }, { "heading": "3 EXPERIMENTS", "text": "In this section, we evaluate the effectiveness of Jacobian regularization on robustness. As all regularizers constrain the learning problem, we begin by confirming that our regularizer effectively reduces the value of the Frobenius norm of the Jacobian while simultaneously maintaining or improving generalization to an unseen test set. We then present our core result, that Jacobian regularization provides significant robustness against corruption of input data from both random and adversarial perturbations (Section 3.2). In the main text we present results mostly with the MNIST dataset; the corresponding experiments for the CIFAR-10 (Krizhevsky and Hinton, 2009) and ImageNet (Deng et al., 2009) datasets are relegated to Appendices E and H. The following specifications apply throughout our experiments:

Datasets: The MNIST data consist of black-white images of hand-written digits with 28-by-28 pixels, partitioned into 60,000 training and 10,000 test samples (LeCun et al., 1998). We preprocess the data by subtracting the mean (0.1307) and dividing by the variance (0.3081) of the training data.

Implementation Details: For the MNIST dataset, we use the modernized version of LeNet-5 (LeCun et al., 1998), henceforth denoted LeNet' (see Appendix D for full details). We optimize using SGD with momentum, ρ = 0.9, and our supervised loss equals the standard cross-entropy with one-hot targets. The model parameters θ are initialized at iteration t = 0 by the Xavier method (Glorot and Bengio, 2010) and the initial descent value is set to 0. The hyperparameters for all models are chosen to match reference implementations: the L2 regularization coefficient (weight decay) is set to λWD = 5 · 10−4 and the dropout rate is set to pdrop = 0.5. The Jacobian regularization coefficient, λJR = 0.01, is chosen by optimizing for clean performance and robustness on the white noise perturbation. (See Appendix G for performance dependence on the coefficient λJR.)" }, { "heading": "3.1 EVALUATING GENERALIZATION", "text": "The main goal of supervised learning involves generalizing from a training set to an unseen test set. In dealing with such a distributional shift, overfitting to the training set and the concomitant degradation in test performance is the central concern. For neural networks one of the most standard antidotes to this overfitting instability is L2 regularization (Hinton, 1987; Krogh and Hertz, 1992; Zhang et al., 2018). More recently, dropout regularization has been proposed as another way to circumvent overfitting (Srivastava et al., 2014). Here we show how Jacobian regularization can serve as yet another solution. This is also in line with the observed correlation between the input-output Jacobian and generalization performance (Novak et al., 2018).

Generalizing within domain: We first verify that in the clean case, where the test set is composed of unseen samples drawn from the same distribution as the training data, the Jacobian regularizer does not adversely affect classification accuracy. Table 1 reports performance on the MNIST test set for the LeNet' model trained on either a subsample or all of the MNIST train set, as indicated. When learning using all 60,000 training examples, the learning rate is initially set to η0 = 0.1 with mini-batch size |B| = 100 and then decayed ten-fold after each 50,000 SGD iterations; each simulation is run for 150,000 SGD iterations in total. When learning using a small subsample of the full training set, training is carried out using SGD with full batch and a constant learning rate η = 0.01, and the model performance is evaluated after 10,000 iterations. The main observation is that optimizing with the proposed Jacobian regularizer or the commonly used L2 and dropout regularizers does not change performance on clean within-domain test samples in any statistically significant way. Notably, when few samples are available during learning, performance improved with increased regularization in the form of jointly optimizing over all criteria. Finally, in the rightmost column of Table 1, we confirm that the model trained with all data and regularized with the Jacobian minimization objective has an order of magnitude smaller Jacobian norm than models trained without Jacobian regularization. This indicates that while the model continues to make the same predictions on clean data, the margins around each prediction have increased as desired.

Generalizing to a new domain: We test the limits of the generalization provided by Jacobian regularization by evaluating an MNIST learned model on data drawn from a new target domain distribution – the USPS (Hull, 1994) test set. Here, models are trained on the MNIST data as above, and the USPS test dataset consists of 2007 black-white images of hand-written digits with
16-by-16 pixels; images are upsampled to 28-by-28 pixels using bilinear interpolation and then preprocessed following the MNIST protocol stipulated above.

Table 2: Generalization on clean test data from an unseen domain. LeNet' models learned with all MNIST training data are evaluated for accuracy on data from the novel input domain of the USPS test set. Here, each regularizer, including Jacobian, increases accuracy over an unregularized model. In addition, the regularizers may be combined for the strongest generalization effects. Averages and 95% confidence intervals are estimated over 5 distinct runs.

No regularization: 80.4 ± 0.7 | L2: 83.3 ± 0.8 | Dropout: 81.9 ± 1.4 | Jacobian: 81.3 ± 0.9 | All combined: 85.7 ± 1.0

(a) White noise (b) PGD (c) CW
Figure 3: Robustness against random and adversarial input perturbations. This key result illustrates that Jacobian regularization significantly increases the robustness of a learned model with LeNet' architecture trained on the MNIST dataset. (a) Considering robustness under white noise perturbations, Jacobian minimization is the most effective regularizer. (b,c) Jacobian regularization alone outperforms an adversarial training defense (base models all include L2 and dropout regularization). Shades indicate standard deviations estimated over 5 distinct runs.

Table 2 offers preliminary evidence that regularization, of each of the three forms studied, can be used to learn a source model which better generalizes to an unseen target domain. We again find that the regularizers may be combined to increase the generalization property of the model. Such a regularization technique can be immediately combined with state-of-the-art domain adaptation techniques to achieve further gains." }, { "heading": "3.2 EVALUATING UNDER DATA CORRUPTION", "text": "This section showcases the main robustness results of the Jacobian regularizer, highlighted in the case of both random and adversarial input perturbations.

Random Noise Corruption: The real world can differ from idealized experimental setups and input data can become corrupted by various natural causes such as random noise and occlusion. Robust models should minimize the impact of such corruption. As one evaluation of stability to natural corruption, we perturb each test input image x to \tilde{x} = x + \epsilon, where each component of the perturbation vector is drawn from the normal distribution with variance \sigma_{\rm noise}^2 as

\epsilon_i \sim \mathcal{N}(0, \sigma_{\rm noise}^2) ,   (9)

and the perturbed image is then clipped to fit into the range [0, 1] before preprocessing. As in the domain-adaptation experiment above, models are trained on the clean MNIST training data and then tested on corrupted test data. Results in Figure 3a show that models trained with the Jacobian regularization are more robust against white noise than others. This is in line with – and indeed quantitatively validates – the embiggening of decision cells as shown in Figure 1.
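As an aside, this corruption protocol is simple enough to state in a few lines of code; the following sketch (ours, using the MNIST normalization constants stated in Section 3) is one possible reading:

    import torch

    def corrupt_with_white_noise(x, sigma_noise):
        # x: raw images in [0, 1], before mean/variance preprocessing.
        eps = sigma_noise * torch.randn_like(x)  # eps_i ~ N(0, sigma_noise^2), Eq. (9)
        x_tilde = (x + eps).clamp(0.0, 1.0)      # clip back into the valid pixel range
        return (x_tilde - 0.1307) / 0.3081       # MNIST preprocessing from Section 3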
Adversarial Perturbations: The world is not only imperfect but also possibly filled with evil agents that can deliberately attack models. Such adversaries seek a small perturbation to each input example that changes the model predictions while also being imperceptible to humans. Obtaining the actual smallest perturbation is likely computationally intractable, but there exist many tractable approximations. The simplest attack is the white-box untargeted fast gradient sign method (FGSM) (Goodfellow et al., 2014), which distorts the image as \tilde{x} = x + \epsilon (clipped back into the valid pixel range) with

\epsilon_i = \varepsilon_{\rm FGSM} \cdot {\rm sign}\left( \sum_c \frac{\partial \mathcal{L}_{\rm super}}{\partial z_c} J_{c;i} \right) .   (10)

This attack aggregates nonzero components of the input-output Jacobian to a substantial effect by adding them up with a consistent sign. In Figure 3b we consider a stronger attack, the projected gradient descent (PGD) method (Kurakin et al., 2016; Madry et al., 2017), which iterates the FGSM attack in Equation (10) k times with fixed amplitude εFGSM = 1/255 while also requiring each pixel value to be within 32/255 away from the original value. Even stronger is the Carlini-Wagner (CW) attack (Carlini and Wagner, 2017) presented in Figure 3c, which yields more reliable estimates of distance to the closest decision boundary (see Appendix F). Results unequivocally show that models trained with the Jacobian regularization are again more resilient than others. As a baseline defense benchmark, we implemented adversarial training, where the training image is corrupted through the FGSM attack with uniformly drawn amplitude εFGSM ∈ [0, 0.01]; the Jacobian regularization can be combined with this defense mechanism to further improve the robustness.5 Appendix A additionally depicts decision cells in adversarial directions, further illustrating the stabilizing effect of the Jacobian regularizer." }, { "heading": "4 RELATED WORK", "text": "To our knowledge, double backpropagation (Drucker and LeCun, 1991; 1992) is the earliest attempt to penalize large derivatives with respect to input data, in which (∂L_super/∂x)^2 is added to the loss in order to reduce the generalization gap.6 Different incarnations of a similar idea have appeared in the following decades (Simard et al., 1992; Mitchell and Thrun, 1993; Aires et al., 1999; Rifai et al., 2011; Gulrajani et al., 2017; Yoshida and Miyato, 2017; Czarnecki et al., 2017; Jakubovitz and Giryes, 2018). Among them, Jacobian regularization as formulated herein was proposed by Gu and Rigazio (2014) to combat adversarial attacks. However, the authors did not implement it due to a computational concern – resolved by us in Section 2 – and instead layer-wise Jacobians were penalized. Unfortunately, minimizing layer-wise Jacobians puts a stronger constraint on model capacity than minimizing the input-output Jacobian. In fact, several authors subsequently claimed that the layer-wise regularization degrades test performance on clean data (Goodfellow et al., 2014; Papernot et al., 2016b) and results in marginal improvement of robustness (Carlini and Wagner, 2017).

Very recently, full Jacobian regularization was implemented in Sokolić et al. (2017), but in an inefficient manner whose computational overhead for computing gradients scales linearly with the number of output classes C compared to unregularized optimization, and thus they had to resort back to the layer-wise approximation above for tasks with a large number of output classes. This computational problem was resolved by Varga et al. (2017) in exactly the same way as our approach (referred to as spherical SpectReg in Varga et al. (2017)). As emphasized in Section 2, we performed more thorough theoretical and empirical convergence analysis and showed that there is practically no difference in model solution quality between the exact and random projection methods in terms of test accuracy and stability. Further, both of these two references deal only with the generalization property and did not fully explore strong distributional shifts and noise/adversarial defense. In particular, we have visualized (Figure 1) and quantitatively borne out (Section 3) the stabilizing effect of Jacobian regularization on classification margins of a nonlinear neural network.

5We also tried the defensive distillation technique of Papernot et al. (2016b). While the model trained with distillation temperature T = 100 and attacked with T = 1 appeared robust against FGSM/PGD adversaries, it was fragile once attacked at T = 100 and thus cannot be robust against white-box attacks. This is in line with the numerical precision issue observed by Carlini and Wagner (2016).

6This approach was slightly generalized in Lyu et al. (2015) in the context of adversarial defense; see also Ororbia II et al. (2016); Ross and Doshi-Velez (2018)." 
}, { "heading": "5 CONCLUSION", "text": "In this paper, we motivated Jacobian regularization as a task-agnostic method to improve stability of models against perturbations to input data. Our method is simply implementable in any open source automatic differentiation system, and additionally we have carefully shown that the approximate nature of the random projection is virtually negligible. Furthermore, we have shown that Jacobian regularization enlarges the size of decision cells and is practically effective in improving the generalization property and robustness of the models, which is especially useful for defense against input-data corruption. We hope practitioners will combine our Jacobian regularization scheme with the arsenal of other tricks in machine learning and prove it useful in pushing the (decision) boundary of the field and ensuring stable deployment of models in everyday life." }, { "heading": "A GALLERY OF DECISION CELLS", "text": "We show in Figure S1 plots similar to the ones shown in Figure 1 in the main text, but with different seeds for training models and around different test data points. Additionally, shown in Figure S2 are similar plots but with different scheme for hyperplane slicing, based on adversarial directions. Interestingly, the adversarial examples constructed with unprotected model do not fool the model trained with Jacobian regularization.\nFigure S2: Cross sections of decision cells in the input space for LeNet’ models trained on the MNIST dataset along adversarial hyperplanes. Namely, given a test sample (black dot), the hyperplane through it is spanned by two adversarial examples identified through FGSM, one for the model trained with L2 regularization λWD = 0.0005 and dropout rate 0.5 but no defense (dark-grey dot; left figure) and the other for the model with the same standard regularization methods plus Jacobian regularization λJR = 0.01 and adversarial training (white-grey dot; right figure)." }, { "heading": "B ADDITIONAL DETAILS FOR EFFICIENT ALGORITHM", "text": "Let us denote by Ev̂∼SC−1 [F (v̂)] the average of the arbitrary function F over C-dimensional vectors v̂ sampled uniformly from the unit sphere SC−1. As in Algorithm 1, such a unit vector can be sampled by first sampling each component vc from the standard normal distribution N (0, 1) and then normalizing it as v̂ ≡ v/||v||. In our derivation, the following formula proves useful:\nEv̂∼SC−1 [F (v̂)] = ∫ dµ(O)F (Oe) , (11)\nwhere e is an arbitrary C-dimensional unit vector and ∫\ndµ(O) [. . .] is an integral over orthogonal matrices O over the Haar measure with normalization ∫ dµ(O) [1] = 1.\nFirst, let us derive Equation (7). Using Equation (11), the square of the Frobenius norm can then be written as\n||J(x)||2F = Tr ( JJT ) ,\n= ∫ dµ(O)Tr ( OJJTOT ) ,\n= ∫ dµ(O) ∑ {e} eOJ JTOTeT ,\n= ∑ {e} Ev̂∼SC−1 [ v̂JJTv̂T ] ,\n= C Ev̂∼SC−1 [ v̂JJTv̂T ] , (12)\nwhere in the second line we insert the identity matrix in the form I = OTO and make use of the cyclicity of the trace; in the third line we rewrite the trace as a sum over an orthonormal basis {e} of the C-dimensional output space; in the forth line Equation (11) was used; and in the last line we note that the expectation no longer depends on the basis vectors e and perform the trivial sum. This completes the derivation of Equation (7).\nNext, let us compute the variance of our estimator. 
Next, let us compute the variance of our estimator. Using tricks as before, but in reverse order, yields

{\rm var}\left( C \, \hat{v} J J^{\rm T} \hat{v}^{\rm T} \right) \equiv C^2 \, \mathbb{E}_{\hat{v} \sim S^{C-1}}\left[ \left( \hat{v} J J^{\rm T} \hat{v}^{\rm T} \right)^2 \right] - ||J(x)||_F^4
= C^2 \int d\mu(O) \left[ e O J J^{\rm T} O^{\rm T} e^{\rm T} \, e O J J^{\rm T} O^{\rm T} e^{\rm T} \right] - ||J(x)||_F^4 .   (13)

In this form, we use the following formula (Collins and Śniady, 2006; Collins and Matsumoto, 2009) to evaluate the first term7

\int d\mu(O) \, O_{c_1 c_5} O^{\rm T}_{c_6 c_2} O_{c_3 c_7} O^{\rm T}_{c_8 c_4}
= \frac{C+1}{C(C-1)(C+2)} \left( \delta_{c_1 c_2}\delta_{c_3 c_4}\delta_{c_5 c_6}\delta_{c_7 c_8} + \delta_{c_1 c_3}\delta_{c_2 c_4}\delta_{c_5 c_7}\delta_{c_6 c_8} + \delta_{c_1 c_4}\delta_{c_2 c_3}\delta_{c_5 c_8}\delta_{c_6 c_7} \right)
- \frac{1}{C(C-1)(C+2)} \left( \delta_{c_1 c_2}\delta_{c_3 c_4}\delta_{c_5 c_7}\delta_{c_6 c_8} + \delta_{c_1 c_2}\delta_{c_3 c_4}\delta_{c_5 c_8}\delta_{c_6 c_7} + \delta_{c_1 c_3}\delta_{c_2 c_4}\delta_{c_5 c_6}\delta_{c_7 c_8} + \delta_{c_1 c_3}\delta_{c_2 c_4}\delta_{c_5 c_8}\delta_{c_6 c_7} + \delta_{c_1 c_4}\delta_{c_2 c_3}\delta_{c_5 c_6}\delta_{c_7 c_8} + \delta_{c_1 c_4}\delta_{c_2 c_3}\delta_{c_5 c_7}\delta_{c_6 c_8} \right) .   (14)

After the dust settles with various cancellations, the expression for the variance simplifies to

{\rm var}\left( C \, \hat{v} J J^{\rm T} \hat{v}^{\rm T} \right) = \frac{2C}{C+2} \, {\rm Tr}\left( J J^{\rm T} J J^{\rm T} \right) - \frac{2}{C+2} \, ||J(x)||_F^4 .   (15)

We can strengthen our claim by using the relation ||AB||_F^2 \le ||A||_F^2 ||B||_F^2 with A = J and B = J^T, which yields {\rm Tr}(J J^{\rm T} J J^{\rm T}) \le ||J(x)||_F^4 and in turn bounds the variance divided by the square of the mean as

\frac{{\rm var}\left( C \, \hat{v} J J^{\rm T} \hat{v}^{\rm T} \right)}{\left[ {\rm mean}\left( C \, \hat{v} J J^{\rm T} \hat{v}^{\rm T} \right) \right]^2} \le 2 \left( \frac{C-1}{C+2} \right) .   (16)

7We thank Nick Hunter-Jones for providing us with the inelegant but concretely actionable form of this integral.

The right-hand side is independent of J and thus independent of the details of model architecture and particular data set considered.

In the end, the relative error of the random-projection estimate for ||J(x)||_F^2 with n_proj random vectors will diminish as some order-one number divided by n_proj^{1/2}. In addition, upon averaging ||J(x)||_F^2 over a mini-batch of samples of size |B|, we expect the relative error of the Jacobian regularization term to be additionally suppressed by ∼ 1/\sqrt{|B|}.

Finally, we speculate that in the large-C limit – possibly relevant for large-class datasets such as ImageNet (Deng et al., 2009) – there might be additional structure in the Jacobian traces (e.g. central-limit concentration) that leads to further suppression of the variance." }, { "heading": "C CYCLOPROPAGATION FOR JACOBIAN REGULARIZATION", "text": "It is also possible to derive a closed-form expression for the derivative of the Jacobian regularizer, thus bypassing any need for random projections while maintaining computational efficiency. The expression is here derived for the multilayer perceptron, though we expect similar computations may be done for other models of interest. We provide full details in case one may find it practically useful to implement explicitly in any open-source packages or generalize it to other models.

Let us denote the input x_i and the output z_c = z^{(L)}_c where (identifying {i} = {i_0} = {1, \ldots, I} and {c} = {i_L} = {1, \ldots, C})

z^{(0)}_{i_0} \equiv x_{i_0} ,   (17)
\hat{z}^{(\ell)}_{i_\ell} = \sum_{i_{\ell-1}} w^{(\ell)}_{i_\ell, i_{\ell-1}} z^{(\ell-1)}_{i_{\ell-1}} + b^{(\ell)}_{i_\ell} \quad {\rm for} \ \ell = 1, \ldots, L ,   (18)
z^{(\ell)}_{i_\ell} = \sigma\left( \hat{z}^{(\ell)}_{i_\ell} \right) \quad {\rm for} \ \ell = 1, \ldots, L .   (19)

Defining the layer-wise Jacobian as

J^{(\ell)}_{i_\ell, i_{\ell-1}} \equiv \frac{\partial z^{(\ell)}_{i_\ell}}{\partial z^{(\ell-1)}_{i_{\ell-1}}} = \sigma'\left( \hat{z}^{(\ell)}_{i_\ell} \right) w^{(\ell)}_{i_\ell, i_{\ell-1}} \quad ({\rm no\ summation}) ,   (20)

the total input-output Jacobian is given by

J_{i_L, i_0} \equiv \frac{\partial z^{(L)}_{i_L}}{\partial z_{i_0}} = \left[ J^{(L)} J^{(L-1)} \cdots J^{(1)} \right]_{i_L, i_0} .   (21)

The Jacobian regularizer of interest is defined as (up to the magnitude coefficient λJR)

R_{\rm JR} \equiv \frac{1}{2} ||J||_F^2 \equiv \frac{1}{2} \sum_{i_0, i_L} \left( J_{i_L, i_0} \right)^2 = \frac{1}{2} {\rm Tr}\left[ J^{\rm T} J \right] .   (22)

Its derivatives with respect to biases and weights are denoted as

\tilde{B}^{(\ell)}_{j_\ell} \equiv \frac{\partial R_{\rm JR}}{\partial b^{(\ell)}_{j_\ell}} ,   (23)
\tilde{W}^{(\ell)}_{j_\ell, j_{\ell-1}} \equiv \frac{\partial R_{\rm JR}}{\partial w^{(\ell)}_{j_\ell, j_{\ell-1}}} .   (24)

Some straightforward algebra then yields

\tilde{B}^{(\ell)}_{j_\ell} = \left[ \frac{\tilde{B}^{(\ell+1)}}{\sigma'(\hat{z}^{(\ell+1)})} J^{(\ell+1)} \right]_{j_\ell} \sigma'(\hat{z}^{(\ell)}_{j_\ell}) + \frac{\sigma''\left( \hat{z}^{(\ell)}_{j_\ell} \right)}{\sigma'\left( \hat{z}^{(\ell)}_{j_\ell} \right)} \left[ J^{(\ell)} \cdots J^{(1)} \cdot J^{\rm T} \cdot J^{(L)} \cdots J^{(\ell+1)} \right]_{j_\ell, j_\ell} ,   (25)

and

\tilde{W}^{(\ell)}_{j_\ell, j_{\ell-1}} = \tilde{B}^{(\ell)}_{j_\ell} z^{(\ell-1)}_{j_{\ell-1}} + \sigma'\left( \hat{z}^{(\ell)}_{j_\ell} \right) \left[ J^{(\ell-1)} \cdots J^{(1)} \cdot J^{\rm T} \cdot J^{(L)} \cdots J^{(\ell+1)} \right]_{j_{\ell-1}, j_\ell} ,   (26)

where we have set

\tilde{B}^{(L+1)}_{j_{L+1}} = J^{(L+1)}_{j_{L+1}} = 0 .   (27)

Algorithmically, we can iterate the following steps for \ell = L, L-1, \ldots, 1:

1. Compute8

\Omega^{(\ell)}_{j_{\ell-1}, j_\ell} \equiv \left[ J^{(\ell-1)} \cdots J^{(1)} \cdot J^{\rm T} \cdot J^{(L)} \cdots J^{(\ell+1)} \right]_{j_{\ell-1}, j_\ell} .   (28)

2. Compute

\frac{\partial R}{\partial b^{(\ell)}_{j_\ell}} = \tilde{B}^{(\ell)}_{j_\ell} = \left[ \frac{\tilde{B}^{(\ell+1)}}{\sigma'(\hat{z}^{(\ell+1)})} J^{(\ell+1)} \right]_{j_\ell} \sigma'(\hat{z}^{(\ell)}_{j_\ell}) + \sigma''\left( \hat{z}^{(\ell)}_{j_\ell} \right) \sum_{j_{\ell-1}} w^{(\ell)}_{j_\ell, j_{\ell-1}} \Omega^{(\ell)}_{j_{\ell-1}, j_\ell} .   (29)

3. Compute

\frac{\partial R}{\partial w^{(\ell)}_{j_\ell, j_{\ell-1}}} = \tilde{W}^{(\ell)}_{j_\ell, j_{\ell-1}} = \tilde{B}^{(\ell)}_{j_\ell} z^{(\ell-1)}_{j_{\ell-1}} + \sigma'\left( \hat{z}^{(\ell)}_{j_\ell} \right) \Omega^{(\ell)}_{j_{\ell-1}, j_\ell} .   (30)

8For \ell = 1, the part J^{(\ell-1)} \cdots J^{(1)} is vacuous. Similarly, for \ell = L, the part J^{(L)} \cdots J^{(\ell+1)} is vacuous.

Note that the layer-wise Jacobians, J^{(\ell)}'s, are calculated within the standard backpropagation algorithm. The core of the algorithm is in the computation of \Omega^{(\ell)}_{j_{\ell-1}, j_\ell} in Equation (28). It is obtained by first backpropagating from \ell−1 to 1, then forwardpropagating from 1 to L, and finally backpropagating from L to \ell+1. It thus makes the cycle around \ell, hence the name cyclopropagation." }, { "heading": "D DETAILS FOR MODEL ARCHITECTURES", "text": "In order to describe the architectures of our convolutional neural networks in detail, let us associate a tuple [F, Cin → Cout, S, P; M] to a convolutional layer with filter width F, number of in-channels Cin and out-channels Cout, stride S, and padding P, followed by nonlinear activations and then a max-pooling layer of width M (note that M = 1 corresponds to no pooling). Let us also associate a pair [Nin → Nout] to a fully-connected layer passing Nin inputs into Nout units with activations and possibly dropout.

With these notations, our LeNet' model used for the MNIST experiments consists of a (28, 28, 1) input followed by a convolutional layer with [5, 1 → 6, 1, 2; 2], another one with [5, 6 → 16, 1, 0; 2], a fully-connected layer with [2100 → 120] and dropout rate pdrop, another fully-connected layer with [120 → 84] and dropout rate pdrop, and finally a fully-connected layer with [84 → 10], yielding 10-dimensional output logits. For our nonlinear activations, we use the hyperbolic tangent.

For the CIFAR-10 dataset, we use the model architecture specified in the paper on defensive distillation (Papernot et al., 2016b), abbreviated as DDNet. Specifically, the model consists of a (32, 32, 3) input followed by convolutional layers with [3, 3 → 64, 1, 0; 1], [3, 64 → 64, 1, 0; 2], [3, 64 → 128, 1, 0; 1], and [3, 128 → 128, 1, 0; 2], and then fully-connected layers with [3200 → 256] and dropout rate pdrop, with [256 → 256] and dropout rate pdrop, and with [256 → 10], again yielding 10-dimensional output logits. All activations are rectified linear units.

In addition, we experiment with a version of ResNet-18 (He et al., 2016) modified for the 32-by-32 input size of CIFAR-10 and shown to achieve strong performance on clean image recognition.9 For this architecture, we use the standard PyTorch initialization of the parameters. Data preprocessing and optimization hyperparameters for both architectures are specified in the next section.

9Model available at: https://github.com/kuangliu/pytorch-cifar.
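As one concrete rendering of these tuples, the DDNet specification translates into the PyTorch module sketched below (our own reading, not the authors' code; note that the 128·5·5 = 3200 flatten implied by 32-by-32 inputs indeed matches the stated [3200 → 256] layer):

    import torch.nn as nn

    class DDNet(nn.Module):
        # CIFAR-10 DDNet per the tuples above; ReLU activations throughout.
        def __init__(self, p_drop=0.5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 64, 3), nn.ReLU(),                      # [3, 3->64, 1, 0; 1]
                nn.Conv2d(64, 64, 3), nn.ReLU(), nn.MaxPool2d(2),    # [3, 64->64, 1, 0; 2]
                nn.Conv2d(64, 128, 3), nn.ReLU(),                    # [3, 64->128, 1, 0; 1]
                nn.Conv2d(128, 128, 3), nn.ReLU(), nn.MaxPool2d(2),  # [3, 128->128, 1, 0; 2]
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(3200, 256), nn.ReLU(), nn.Dropout(p_drop),  # [3200 -> 256]
                nn.Linear(256, 256), nn.ReLU(), nn.Dropout(p_drop),   # [256 -> 256]
                nn.Linear(256, 10),                                   # 10-dimensional logits
            )

        def forward(self, x):
            return self.classifier(self.features(x))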
For our ImageNet experiments, we use the standard ResNet-18 model available within PyTorch (torchvision.models.resnet) together with standard weight initialization.

Note that there is typically no dropout regularization in the ResNet models, but we still examine the effect of L2 regularization in addition to Jacobian regularization." }, { "heading": "E RESULTS FOR CIFAR-10", "text": "The following specifications apply throughout this section for CIFAR-10 experiments with the DDNet and ResNet-18 model architectures (see Appendix D).

• Datasets: the CIFAR-10 dataset consists of color images of objects – divided into ten categories – with 32-by-32 pixels in each of 3 color channels, each pixel ranging in [0, 1], partitioned into 50,000 training and 10,000 test samples (Krizhevsky and Hinton, 2009). The images are preprocessed by uniformly subtracting 0.5 and multiplying by 2 so that each pixel ranges in [−1, 1].

• Optimization: essentially the same as for LeNet' on MNIST, except the initial learning rate for full training. Namely, model parameters θ are initialized at iteration t = 0 by the Xavier method (Glorot and Bengio, 2010) for DDNet and standard PyTorch initialization for ResNet-18, along with the zero initial velocity v(t = 0) = 0. They evolve under the SGD dynamics with momentum ρ = 0.9, and for the supervised loss we use cross-entropy with one-hot targets. For training with the full training set, the mini-batch size is set as |B| = 100, and the learning rate η is initially set to η0 = 0.01 for DDNet and η0 = 0.1 for ResNet-18 and in both cases quenched ten-fold after each 50,000 SGD iterations; each simulation is run for 150,000 SGD iterations in total. For few-shot learning, training is carried out using full-batch SGD with a constant learning rate η = 0.01, and model performance is evaluated after 10,000 iterations.

• Hyperparameters: the same values are inherited from the experiments for LeNet' on MNIST and no tuning was performed. Namely, the weight decay coefficient λWD = 5 · 10−4; the dropout rate pdrop = 0.5; the Jacobian regularization coefficient λJR = 0.01; and adversarial training with uniformly drawn FGSM amplitude εFGSM ∈ [0, 0.01].

The results relevant for generalization properties are shown in Table S3. One difference from the MNIST counterparts in the main text is that dropout improves test accuracy more than L2 regularization. Meanwhile, for both setups the order of stability measured by ||J||F on the test set more or less stays the same. Most importantly, turning on the Jacobian regularizer improves the stability by orders of magnitude, and combining it with other regularizers does not compromise this effect.

The results relevant for robustness against input-data corruption are plotted in Figures S3 and S4. The success of the Jacobian regularizer is retained for the white-noise and CW adversarial attacks. For the PGD attack, results are mixed at high degradation levels when Jacobian regularization is combined with adversarial training. This might be an artifact stemming from the simplicity of the PGD search algorithm, which overestimates the shortest distance to adversarial examples in comparison to the CW attack (see Appendix F), combined with Jacobian regularization's effect on simplifying the loss landscape with respect to the input space that the attack methods explore.

(a) White noise (b) PGD (c) CW
Figure S3: Robustness against random and adversarial input perturbations for DDNet models trained on the CIFAR-10 dataset. Shades indicate standard deviations estimated over 5 distinct runs. (a) Comparison of regularization methods for robustness to white noise perturbations. (b,c) Comparison of different defense methods against adversarial attacks (all models here equipped with L2 and dropout regularization).

(a) White noise (b) PGD (c) CW
Figure S4: Robustness against random and adversarial input perturbations for ResNet-18 models trained on the CIFAR-10 dataset. Shades indicate standard deviations estimated over 5 distinct runs. (a) Comparison of regularization methods for robustness to white noise perturbations. (b,c) Comparison of different defense methods against adversarial attacks (all models here equipped with L2 regularization but not dropout: see Appendix D).

(a) Undefended; MNIST; LeNet' (b) Undefended; CIFAR-10; DDNet (c) Undefended; CIFAR-10; ResNet-18 (d) Defended; MNIST; LeNet' (e) Defended; CIFAR-10; DDNet (f) Defended; CIFAR-10; ResNet-18
Figure S5: Effects on test accuracy incurred by various modes of attacks. (a,d) LeNet' on MNIST, (b,e) DDNet on CIFAR-10, and (c,f) ResNet-18 on CIFAR-10 trained (a,b,c) without defense and (d,e,f) with defense – Jacobian regularization magnitude λJR = 0.01 and adversarial training with εFGSM ∈ [0, 0.01] – all also include L2 regularization λWD = 0.0005 and (except ResNet-18) dropout rate 0.5." }, { "heading": "F WHITE NOISE VS. FGSM VS. PGD VS. CW", "text": "In Figure S5, we compare the effects of various input perturbations on changing a model's decision. For each attack method, the fooling L2 distance in the original input space – before preprocessing – is measured between the original image and the fooling image as follows (for all attacks, cropping is performed to put pixels in the range [0, 1] in the original space): (i) for the white noise attack, a random direction in the input space is chosen and the magnitude of the noise is cranked up until the model yields a wrong prediction; (ii) for the FGSM attack, the gradient is computed at a clean sample and then the magnitude εFGSM is cranked up until the model is fooled; (iii) for the PGD attack, the attack step with εFGSM = 1/255 is iterated until the model is fooled [as is customary for PGD and described in the main text, there is a saturation constraint that demands each pixel value to be within 32/255 (MNIST) and 16/255 (CIFAR-10) away from the original clean value]; and (iv) the CW attack halts when fooling is deemed successful. Here, for the CW attack (see Carlini and Wagner (2017) for details of the algorithm) the Adam optimizer on the logits loss (their f6) is used with the learning rate 0.005, and the initial value of the conjugate variable, c, is set to be 0.01 and binary-searched for 10 iterations. 
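For concreteness, the PGD protocol in item (iii) admits a short sketch (our own minimal implementation, not the authors' code; preprocess stands for the dataset normalization, and the per-batch stopping test is a simplification of the per-example halt described above):

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, preprocess, eps_step=1/255, eps_max=32/255, max_iter=200):
        # x: clean images in [0, 1]; y: true labels. Returns perturbed images in [0, 1].
        x_adv = x.clone()
        for _ in range(max_iter):
            x_in = preprocess(x_adv).requires_grad_(True)
            loss = F.cross_entropy(model(x_in), y)
            grad, = torch.autograd.grad(loss, x_in)
            # The sign is unaffected by affine preprocessing with positive scale.
            x_adv = x_adv + eps_step * grad.sign()                         # one FGSM step, Eq. (10)
            x_adv = torch.min(torch.max(x_adv, x - eps_max), x + eps_max)  # saturation constraint
            x_adv = x_adv.clamp(0.0, 1.0)                                  # stay in [0, 1]
            if (model(preprocess(x_adv)).argmax(dim=1) != y).all():        # halt once fooled
                break
        return x_adv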
For each model and attack method, the shortest distance is evaluated for 1,000 test samples, and the test error (= 100%− test accuracy) at a given distance indicates the amount of test examples misclassified with the fooling distance below that given distance.\nBelow, we highlight various notable features.\n• The most important highlight is that, in terms of effectiveness of attacks, CW > PGD > FGSM > white noise, duly respecting the complexity of the search methods for finding adversarial examples. Compared to CW attack, the simple methods such as FGSM and PGD attacks could sometime yield erroneous picture for the geometry of the decision cells, especially regarding the closest decision boundary.\n• The kink for PGD attack in Figure S5d is due to imposing saturation constraint that demands each pixel value to be within 32/255 away from the original clean value. We think that this constraint is unnatural, and impose it here only because it is customary. • While the CW attack fools almost all the examples for LeNet’ on MNIST and DDNet\non CIFAR-10, it fails to fool some examples for ResNet-18 on CIFAR-10 (and later on ImageNet: see Section H) beyond some distance. We have not carefully tuned the hyperparameters for CW attacks to resolve this issue in this paper." }, { "heading": "G DEPENDENCE ON JACOBIAN REGULARIZATION MAGNITUDE", "text": "In this appendix, we consider the dependence of our robustness measures on the Jacobian regularization magnitude, λJR. These experiments are shown in Figure S6. Cranking up the magnitude of Jacobian regularization, λJR, generally increases the robustness of the model, with varying degree of degradation in performance on clean samples. Typically, we can double the fooling distance without seeing much degradation. This means that in practice modelers using Jacobian regularization can determine the appropriate tradeoff between clean accuracy and robustness to input perturbations for their particular use case. If some expectation for the amount of noises the model might encounter is available, this can very naturally inform the choice of the hyperparameter λJR." }, { "heading": "H RESULTS FOR IMAGENET", "text": "ImageNet (Deng et al., 2009) is a large-scale image dataset. We use the ILSVRC challenge dataset (Russakovsky et al., 2015), which contains images each with a corresponding label classified into one of thousand object categories. Models are trained on the training set and performance is reported on the validation set. Data are preprocessed through subtracting the mean = [0.485, 0.456, 0.406] and dividing by the standard deviation, std = [0.229, 0.224, 0.225], and at training time, this preprocessing is further followed by random resize crop to 224-by-224 and random horizontal flip.\nResNet-18 (see Appendix D) is then trained on the ImageNet dataset through SGD with mini-batch size |B| = 256, momentum ρ = 0.9, weight decay λWD = 0.0001, and initial learning rate η0 = 0.1, quenched ten-fold every 30 epoch, and we evaluate the model for robusness at the end of 100 epochs. Our supervised loss equals the standard cross-entropy with one-hot targets, augmented with the Jacobian regularizer with λJR = 0, 0.0001, 0.0003, and 0.001.\nPreliminary results are reported in Figure S7. 
As is customary, the PGD attack iterates FGSM with εFGSM = 1/255 and has a saturation constraint that demands each pixel is within 16/255 of its original value; the CW attack hyperparameter is same as before and was not fine-tuned; [0, 1]- cropping is performed as usual, but as if preprocessing were performed with RGB-uniform mean shift 0.4490 and standard deviation division 0.2260. The Jacobian regularizer again confers robustness to the model, especially against adversarial attacks. Surprisingly, there is no visible improvement in regard to white-noise perturbations. We hypothesize that this is because the model is already strong against such perturbations even without the Jacobian regularizer, but it remains to be investigated further.\n(a) White; LeNet’ on MNIST (b) PGD; LeNet’ on MNIST (c) CW; LeNet’ on MNIST\n(d) White; DDNet on CIFAR-10 (e) PGD; DDNet on CIFAR-10 (f) CW; DDNet on CIFAR-10\n(g) White; ResNet-18 on CIFAR-10 (h) PGD; ResNet-18 on CIFAR-10 (i) CW; ResNet-18 on CIFAR-10\nFigure S6: Dependence of robustness on the Jacobian regularization magnitude λJR. Accuracy under corruption of input test data are evaluated for various models [base models all include L2 (λWD = 0.0005) regularization and, except for ResNet-18, dropout (rate 0.5) regularization]. Shades indicate standard deviations estimated over 5 distinct runs.\n(a) White; ResNet-18 on ImageNet (b) PGD; ResNet-18 on ImageNet (c) CW; ResNet-18 on ImageNet\nFigure S7: Dependence of robustness on the Jacobian regularization magnitude λJR for ImageNet. Accuracy under corruption of input test data are evaluated for ResNet-18 trained on ImageNet [base models include L2 (λWD = 0.0001)] for a single run. For CW attack in (c), we used 10,000 test examples (rather than 1,000 used for other figures) to compensate for the lack of multiple runs." } ]
2019
null
SP:da1e92e9459d9f305f206e309faa8e9bbf8e6afa
[ "This paper proposes a multichannel generative language model (MGLM), which models the joint distribution p(channel_1, ..., channel_k) over k channels. MGLM can be used for both conditional generation (e.g., machine translation) and unconditional sampling. In the experiments, MGLM uses the Multi30k dataset where multiple high quality channels are available, in the form of multilingual translations.", "This work is an extension of KERMIT (Chan et al., 2019) to multiple languages and the proposed model is called “multichannel generative language models”. KERMIT is an extension of “Insertion Transformer” (Stern et. al, 2019), a non-autoregressive model that can jointly determine which word and which place the translated words should be inserted. KERMIT shares the encoder and decoder of insertion Transformer, and the source sentence and target sentence are concatenated to train a generative model (also, various loss functions are included). In this work, parallel sentences from more than two languages are concatenated together and fed into KERMIT. Each language is associated with a language embedding. This work demonstrates that a joint distribution p(x1, . . . , xk) over k channels/languages can be properly modeled through a single model. The authors carry out experiments on multi30k dataset." ]
A channel corresponds to a viewpoint or transformation of an underlying meaning. A pair of parallel sentences in English and French express the same underlying meaning but through two separate channels corresponding to their languages. In this work, we present Multichannel Generative Language Models (MGLM), which model the joint distribution over multiple channels, and all its decompositions, using a single neural network. MGLM can be trained by feeding it k-way parallel data, bilingual data, or monolingual data across pre-determined channels. MGLM is capable of both conditional generation and unconditional sampling. For conditional generation, the model is given a fully observed channel and generates the k − 1 remaining channels in parallel. In the case of machine translation, this is akin to giving it one source, from which the model generates k − 1 targets. MGLM can also do partial conditional sampling, where the channels are seeded with prespecified words and the model is asked to infill the rest. Finally, we can sample from MGLM unconditionally over all k channels. Our experiments on the Multi30K dataset, containing English, French, Czech, and German, suggest that multitask training with the joint objective leads to improvements in bilingual translations. We provide a quantitative analysis of the quality-diversity trade-offs for different variants of the multichannel model for conditional generation, and a measurement of self-consistency during unconditional generation. We also provide qualitative examples of parallel greedy decoding across languages and of sampling from the joint distribution of the 4 languages.
[]
[ { "authors": [ "Loı̈c Barrault", "Fethi Bougares", "Lucia Specia", "Chiraag Lala", "Desmond Elliott", "Stella Frank" ], "title": "Findings of the third shared task on multimodal machine translation", "venue": "In Proceedings of the Third Conference on Machine Translation: Shared Task Papers,", "year": 2018 }, { "authors": [ "William Chan", "Nikita Kitaev", "Kelvin Guu", "Mitchell Stern", "Jakob Uszkoreit" ], "title": "KERMIT: Generative Insertion-Based Modeling for Sequences, 2019", "venue": null, "year": 2019 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", "venue": "In EMNLP,", "year": 2014 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "In NAACL,", "year": 2019 }, { "authors": [ "Desmond Elliott", "Stella Frank", "Khalil Sima’an", "Lucia Specia" ], "title": "Multi30k: Multilingual englishgerman image descriptions", "venue": "arXiv preprint arXiv:1605.00459,", "year": 2016 }, { "authors": [ "Desmond Elliott", "Stella Frank", "Loı̈c Barrault", "Fethi Bougares", "Lucia Specia" ], "title": "Findings of the second shared task on multimodal machine translation and multilingual image description", "venue": "In Proceedings of the Second Conference on Machine Translation,", "year": 2017 }, { "authors": [ "Jiatao Gu", "Qi Liu", "Kyunghyun Cho" ], "title": "Insertion-based Decoding with Automatically Inferred Generation Order", "venue": "In arXiv,", "year": 2019 }, { "authors": [ "Tatsunori B Hashimoto", "Hugh Zhang", "Percy Liang" ], "title": "Unifying human and statistical evaluation for natural language generation", "venue": "arXiv preprint arXiv:1904.02792,", "year": 2019 }, { "authors": [ "Tianyu He", "Xu Tan", "Yingce Xia", "Di He", "Tao Qin", "Zhibo Chen", "Tie-Yan Liu" ], "title": "Layer-wise coordination between encoder and decoder for neural machine translation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Wouter Kool", "Herke Van Hoof", "Max Welling" ], "title": "Stochastic beams and where to find them: The Gumbel-top-k trick for sampling sequences without replacement", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Taku Kudo" ], "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "venue": "arXiv preprint arXiv:1804.10959,", "year": 2018 }, { "authors": [ "Taku Kudo", "John Richardson" ], "title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "venue": "arXiv preprint arXiv:1808.06226,", "year": 2018 }, { "authors": [ "Guillaume Lample", "Alexis Conneau" ], "title": "Cross-lingual language model pretraining", "venue": "arXiv preprint arXiv:1901.07291,", "year": 2019 }, { 
"authors": [ "Chia-Wei Liu", "Ryan Lowe", "Iulian V Serban", "Michael Noseworthy", "Laurent Charlin", "Joelle Pineau" ], "title": "How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", "venue": "arXiv preprint arXiv:1603.08023,", "year": 2016 }, { "authors": [ "Ehsan Montahaei", "Danial Alihosseini", "Mahdieh Soleymani Baghshah" ], "title": "Jointly measuring diversity and quality in text generation models", "venue": "arXiv preprint arXiv:1904.03971,", "year": 2019 }, { "authors": [ "Jekaterina Novikova", "Ondřej Dušek", "Amanda Cercas Curry", "Verena Rieser" ], "title": "Why we need new evaluation metrics for nlg", "venue": "arXiv preprint arXiv:1707.06875,", "year": 2017 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th annual meeting on association for computational linguistics,", "year": 2002 }, { "authors": [ "Mitchell Stern", "William Chan", "Jamie Kiros", "Jakob Uszkoreit" ], "title": "Insertion Transformer: Flexible Sequence Generation via Insertion Operations", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc Le" ], "title": "Sequence to Sequence Learning with Neural Networks", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention Is All You Need", "venue": null, "year": 2017 }, { "authors": [ "Sean Welleck", "Kiante Brantley", "Hal Daume", "Kyunghyun Cho" ], "title": "Non-Monotonic Sequential Text Generation", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Yingce Xia", "Tianyu He", "Xu Tan", "Fei Tian", "Di He", "Tao Qin" ], "title": "Tied transformers: Neural machine translation with shared encoder and decoder", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Ruslan Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": null, "year": 1906 }, { "authors": [ "Yaoming Zhu", "Sidi Lu", "Lei Zheng", "Jiaxian Guo", "Weinan Zhang", "Jun Wang", "Yong Yu" ], "title": "Texygen: A benchmarking platform for text generation models", "venue": "In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "A natural way to consider two parallel sentences in different languages is that each language is expressing the same underlying meaning under a different viewpoint. Each language can be thought of as a transformation that maps an underlying concept into a view that we collectively agree is determined as ‘English’ or ‘French’. Similarly, an image of a cat and the word ‘cat’ are expressing two views of the same underlying concept. In this case, the image corresponds to a high bandwidth channel and the word ‘cat’ to a low bandwidth channel. This way of conceptualizing parallel viewpoints naturally leads to the formulation of a fully generative model over each instance, where the transformation corresponds to a particular generation of the underlying view. We define each of these views as a channel. As a concrete example, given a parallel corpus of English and French sentences, English and French become two channels and the corresponding generative model becomes p(English,French). One key advantage to this formulation is that single model can be trained that can capture the full expressivity of the underlying concept, allowing us to compute conditionals and marginals along with the joint. In the case of parallel sentences, the conditionals correspond to translations from one channel to another while the marginals correspond to standard monolingual language models.\nIn this work, we present a general framework for modeling the joint distribution p(x1, ...,xk) over k channels. Our framework marginalizes over all possible factorizations of the joint distribution. Subsequently, this allows our framework to perform, 1) unconditional generation and 2) conditional generation. We harness existing recent work on insertion-based methods that utilize semi-autoregressive models that are permutation-invariant to the joint factorization.\nSpecifically, we show a proof-of-concept multichannel modeling by extending KERMIT (Chan et al., 2019) to model the joint distribution over multiple sequence channels. Specifically, we train KERMIT on the Multi30K (Elliott et al., 2016) machine translation task, consisting of four lan-\nguages: English (EN), French (FR), Czech (CS), and German (DE). One advantage of multilingual KERMIT is during inference, we can generate translation for a single target language, or generate translations for k− 1 languages in parallel in logarithmic time in the token length per language. We illustrate qualitative examples for parallel greedy decoding across languages and sampling from the joint distribution of the 4 languages.\nThe key contributions in this work are:\n1. We present MGLM, a multichannel generative modeling framework. MGLM models the joint distribution p(x1, . . . ,xk) over k channels.\n2. We demonstrate both conditional generation (i.e., machine translation) and unconditional sampling from MGLM.\n3. In the case of conditional generation over multiple languages, we show that not only we are competitive in BLEU, but also with significant advantages in inference time and model memory savings.\n4. We analyze the Quality-Diversity tradeoff from sampling MGLM and prior work.\nWe highlight that while we focus on languages as a specific instantiation of a channel, our framework can generalize to any arbitrary specification, such as other types of languages or other modalities." 
}, { "heading": "2 BACKGROUND", "text": "Traditional autoregressive sequence frameworks (Sutskever et al., 2014; Cho et al., 2014) model the conditional probability p(y | x) of an output sequence y conditioned on the input sequence x with a left-to-right factorization. The model decomposes p(y | x) as predicting one output token at time, conditioning on the previously generated output tokens y<t and the input sequence x:\np(y | x) = ∏ t p(yt |,y<t) (1)\nRecent encoder-decoder models with attention such as Transformer (Vaswani et al., 2017) have been successfully applied to various domains, including machine translation. If we were to apply this left-to-right autoregressive approach towards multichannel modeling, we would require to choose a particular factorization order, such as p(w,x,y) = p(w)p(x|w)p(y|x,w). Instead of assuming a fixed left-to-right decomposition, recent autoregressive insertion-based conditional modeling frameworks (Stern et al., 2019; Welleck et al., 2019; Gu et al., 2019) consider arbitrary factorization of the output sequence by using insertion operation, which predicts both (1) content token c ∈ C from the vocabulary, and (2) location l insert, relative to the current partial output ŷt:\np(c, l|x, ŷt) = InsertionTransformer(x, ŷt) (2)\nSubsequent work, KERMIT (Chan et al., 2019), simplified the Insertion Transformer model by removing the encoder and only having a decoder, and the trick is to concatenate the original input and output sequence as one single sequence and optimize over all possible factorizations. Consequently, KERMIT is able to model the joint p(x,y), conditionals p(x | y), p(y | x), as well as the marginals p(x), p(y).\nUnlike with the left-to-right autoregressive approach, the exact computation of the log-likelihood equation 3 is not possible due to the intractable marginalization over the generation order z, where Sn denotes the set of all possible permutations on n elements. However, we can lower bound the log-likelihood using Jensen’s inequality:\nlog p(x) = log ∑ z∈Sn p(z)p(x | z) (3)\n≥ ∑ z∈Sn p(z) log p(x | z) =: L(x) (4)\nThe loss term can be simplified by changing the summation and careful decomposition of the permutation, leading to:\nL(x) = ∑ z∈Sn p(z) log n∏ i=1 p((czi , l z i ) | x z,i−1 1:i−1)\n= n∑ i=1 ∑ z1:i−1 p(z1:i−1) ∑ zi p(zi | z1:i−1) log p((czi , lzi ) | x z,i−1 1:i−1)\nInference can be autoregressive via greedy decoding:\n(ĉ, l̂) = argmax c,l\np(c, l|x̂t), (5)\nor partially autoregressive via parallel decoding:\nĉl = argmax c\np(c | l, x̂t), (6)\nwhich is achieved by inserting at all non-finished slots. Stern et al. (2019) has shown that using a binary tree prior for p(z) led to ≈ log2 n iterations for n token generation." }, { "heading": "3 MULTICHANNEL GENERATIVE LANGUAGE MODELS", "text": "In multichannel generative language modeling, our goal is to learn a generative model given a dataset consisting of a set of sequences {x(i)1 , . . . ,x (i) k }Mi=1 from up to k channels, where x (i) k = [x (i) j,1, . . . , x (i) j,n] represents a sequence of tokens from the j-th channel for the i-th example. The resulting MGLM models a joint generative distribution over multiple channels. While there are many possible implementation of Multichannel Generative Language Models, we chose to extended the work of Chan et al. (2019) to investigate applying the KERMIT objective on tasks with more than 2 sequences, in order to learn the joint distribution p(x1, . . . ,xk) over k channel sequences. 
\n[Figure 1: Training data and inference modes. Left: the Bilingual (uni-direction), Multi-target (any-to-rest), and Joint training sets, each formed by concatenating EN, FR, CS, and DE sequences with [SEP] separators. Right: decoding iterations when inferring a single target language (top) versus multiple target languages in parallel (bottom).]\n\nWe illustrate an example data input consisting of 3 channels in Figure 1 (left). We concatenate the sequences from all channels for each example, separated by a [SEP] token. Even with a shared vocabulary, each channel results in a different token embedding, either via the addition of a channel-specific (learnable) embedding or simply via a separately learned token embedding per channel. After passing through the dense self-attention layers, as per the Transformer architecture, the contextualized representation at each output time step predicts the possible tokens to be inserted to the left of the current input token.\n\nAt inference (generation) time, we can generate unconditionally by seeding the canvas with the [SEP] token and predicting the first actual token, or we can provide as much, or as little, partial/complete sequence in each channel as desired. Figure 1 (right) shows two possible decoding inference modes: a single target language channel (top), or multiple target language channels in parallel (bottom)." }, { "heading": "4 EXPERIMENTS", "text": "We experiment on a multilingual dataset to demonstrate that we can learn MGLM. We perform both qualitative and quantitative experiments, highlighting the model’s capabilities ranging from conditional generation (i.e., machine translation) to unconditionally sampling the joint distribution over multiple languages.\n\nWe experiment on Multi30k (Elliott et al., 2016; 2017; Barrault et al., 2018), a multilingual dataset which consists of 29,000 parallel training sentences in English (EN), French (FR), Czech (CS), and German (DE). We use Multi30k because multiple high-quality channels (multilingual translations in this case) are readily available, which makes it well suited to highlight our framework. We implement MGLM as a base Transformer decoder, without any causal masking, with 6 hidden layers and 1024-dimensional hidden representations. We concatenate the raw text training examples of all 4 languages and use SentencePiece (Kudo & Richardson, 2018) to learn a universal subword unigram (Kudo, 2018) tokenizer with a shared 32K vocabulary. We follow a training setup similar to BERT (Devlin et al., 2019), using the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 1e-4 and warmup over the first 10% of the total training iterations, which vary between 10k and 50k. We can train 3 different variants of MGLM by altering the sampling ratio of training data seen by the model (a sketch of the corresponding data construction follows this list):\n\n1. Bilingual (e.g., EN → FR). We give the model a fully observed source (e.g., EN), and ask the model to infill the target (e.g., FR).\n\n2. Multi-target (e.g., any 1 → Rest). We give the model a fully observed source (e.g., EN), and ask the model to infill the rest of the targets (e.g., DE, FR, CS).\n\n3. Joint. We ask the model to infill all the targets; consequently, we learn a joint distribution over all the languages, p(en, fr, de, cs)."
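As a concrete illustration of the data construction just described, the following is a minimal sketch of how a multichannel training canvas might be assembled; the exact [SEP] layout and the channel-id bookkeeping are one plausible design, assumed for illustration rather than taken from the paper's code.

```python
SEP = "[SEP]"

def build_canvas(channels):
    """Concatenate k channel sequences into a single canvas, mirroring
    Figure 1 (left): [SEP] x1 [SEP] x2 [SEP] ... xk [SEP].
    Returns parallel lists of tokens and channel ids so that a
    channel-specific (learnable) embedding can be added to each token."""
    tokens, channel_ids = [SEP], [0]
    for cid, seq in enumerate(channels, start=1):
        tokens.extend(seq + [SEP])
        channel_ids.extend([cid] * (len(seq) + 1))
    return tokens, channel_ids

# Example: a 3-channel (EN, FR, DE) training instance.
en = ["a", "dog", "runs"]
fr = ["un", "chien", "court"]
de = ["ein", "Hund", "rennt"]
tokens, channel_ids = build_canvas([en, fr, de])
```

For the Bilingual and Multi-target variants, the source channel would be kept fully observed while targets are masked out for infilling; for the Joint variant, all channels are infilling targets.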
}, { "heading": "4.1 TRANSLATION PERFORMANCE", "text": "The goal of MGLM is not conditional generation (i.e., machine translation), but nevertheless, we demonstrate its ability to do conditional generation in this section. We report the BLEU scores on the three test sets: test 2016 Flickr, test 2017 Flickr, test 2017 MSCOCO, for different English → {German, French, Czech} translations. We use parallel greedy decoding (Stern et al., 2019; Chan et al., 2019), i.e. inserting to all incomplete slots. Table 1 summarizes the results for English to German and vice versa, respectively. Additional results for English to French, English to Czech, and German to English are shown in Appendix A.2. We observe that the Multitarget models performed similar to slightly better than the bilingual models trained only on a single language pair. This is particularly useful when multiple machine translation targets are desired. We now only need one MGLM model which is competitive to the bidirectional expert models. This implies we only need 1 model for inference over multiple languages, as opposed to N models (i.e., saving substantial memory).\nWe also observe the full generative joint model has a BLEU gap compared to the bilingual baseline, which is consistent with the findings in Chan et al. (2019). We hypothesize this is due to the joint distribution being a more challenging task. We further hypothesize that in particular, during training the Joint model needs to fantasize additional details when conditioning on partial sequence in each of the channels. This results in fantasizing additional details not present in the original source sentence during translation tasks." }, { "heading": "4.2 PARALLEL GREEDY DECODING: PARALLEL IN TARGET LANGUAGES", "text": "As alluded conceptually in Figure 1 and in the previous section, our KERMIT-based MGLM is also able to perform parallel greedy decoding that is also parallel in number of target languages. We illustrate this process in Figure 2. By starting with K initial [SEP] tokens for K target output languages, MGLM can decode K target languages that has at most n output tokens per language in O(log n), i.e. constant in number of target languages. We investigate the relative speed up in generating multiple target language outputs in parallel versus generating the targets in series, in terms of wall-clock time and number of decoding iterations. In Figure 3a, we plot the number of decoding iterations taken versus the total output length N for each\nsentence in the test 2016 Flickr test set, using the Joint KERMIT model when decoding from a single source language to 3 target languages: English→ {French, German, Czech}. When performing serial target decoding, we only output the target conditioned on English, i.e. English→ French, English→ German, English→ Czech. We also plot several theoretical bounds: (1) upper bound (N ) when decoding entirely serially, (2) lower bound 3(blog2(N/3)c + 2) when decoding 3 languages serially but parallel within each language, (3) lower bound blog2(N/3)c + 2, when decoding the 3 target languages in parallel and parallel within each language, and (4) blog2(N)c+2, if we decode the entire output in parallel as a single sequence. We observe that our model is able to meet the lower bound several times and in many cases decode below the fourth blog2(N)c + 2 bound. Figure 3b compares the wall-clock speed up when decoding targets in parallel vs. in series, with a linear regression line plotted. 
Figure 3b compares the wall-clock speed-up when decoding targets in parallel versus in series, with a linear regression line plotted. Our model achieves almost a 3× wall-clock speed-up. The parallel-target decoding is bottlenecked by the target language with the longest output sequence. Figure 3c compares the total output length when decoding the targets in series versus in parallel. We observe a linear relationship between the output lengths under the two modes." }, { "heading": "4.3 CONDITIONAL BILINGUAL GENERATION: QUALITY-DIVERSITY TRADE-OFF", "text": "We first evaluated the models on the conditional generation task by sampling bilingual translations (1 source, 1 target language) for each of the 12 language pair directions. We sample the token and location (c, l) ∼ p(c, l | x, ŷ) from the partial canvas at each iteration, generating 100 hypothesis translations per source sentence, at softmax temperatures τ = 0.1, 0.5, 1.0. For each temperature and model, we computed the quality of the generated samples as the BLEU (Papineni et al., 2002) score between the reference translation and the samples, and the diversity as the pairwise BLEU between the 100 samples per source, also known as Self-BLEU (Zhu et al., 2018); a sketch of this computation is given at the end of this section. The lower the Self-BLEU, the higher the diversity, as there is less overlap between the samples.\n\nFigure 4 illustrates the quality-diversity trade-off of the three models for different translation pairs involving English as one of the languages. The top-right portion of the graph is the ideal area. We observed that the Multi-target model outperformed the Bilingual model at lower temperatures (both higher quality and higher diversity), and at higher temperatures was slightly above or below in quality but still higher in diversity. Note that a single Multi-target model was used for all language pairs at inference time, while a separate bilingual model was needed for each language pair curve. Therefore, a single Multi-target KERMIT model can outperform specialized bilingual KERMIT models."
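As one concrete way to compute the quality and diversity metrics above, the sketch below uses NLTK's sentence-level BLEU; this is an illustrative choice of BLEU implementation, not necessarily the one used for the reported numbers.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

def quality_bleu(reference, samples):
    """Mean BLEU of each sample against the single reference translation."""
    ref = [reference.split()]
    return sum(sentence_bleu(ref, s.split(), smoothing_function=smooth)
               for s in samples) / len(samples)

def self_bleu(samples):
    """Self-BLEU (Zhu et al., 2018): each sample is scored against all
    other samples as references; lower values indicate higher diversity."""
    tokenized = [s.split() for s in samples]
    scores = []
    for i, hyp in enumerate(tokenized):
        refs = tokenized[:i] + tokenized[i + 1:]
        scores.append(sentence_bleu(refs, hyp, smoothing_function=smooth))
    return sum(scores) / len(scores)
```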
}, { "heading": "4.4 PARTIAL CONDITIONING MULTILINGUAL GENERATION", "text": "We demonstrate our model’s ability to generate infillings under partial conditioning over the multiple channels. To be explicit, we seed each channel with a few (different) words and sample from the model, asking what text completions would best fit under the model’s posterior. Figure 5 highlights several examples of (English, French, German) sentence completion. We took an example from the test 2016 Flickr test set, split it into 3 chunks (beginning in English, middle in French, and ending in German), and sampled completions. The model is able to generate a set of diverse, coherent examples.\n\nEnglish Groundtruth: A young boy, wearing a chef’s hat and apron, is cutting sausages in a kitchen. French Groundtruth: Un jeune garçon, portant une toque et un tablier, coupe des saucisses dans une cuisine. German Groundtruth: Ein kleiner Junge mit Kochmütze und Schürze schneidet in einer Küche Würstchen.\nEnglish Seed: A young boy, French Seed: portant une toque et un tablier, German Seed: schneidet in einer Küche Würstchen.\nEnglish: A young boy , wearing a hat , and an apron grilling hotdogs in the kitchen. French: Un jeune garçon portant une toque et un tablier, faisant cuire du citron et des hotdogs dans la cuisine. German: Ein junger Mann trägt eine Mütze und schneidet in einer Küche Würstchen.\nEnglish: A young boy , wearing a hat and a apron, is in a kitchen , cutting with various foods on it. French: Un jeune garçon, portant une toque et un tablier, est dans une cuisine en projetant des poêles de la nourriture. German: Ein kleiner Junge mit Hut und Schürze schneidet in einer Küche Würstchen.\nEnglish: A young boy, wearing an orange hat and apron, puts barbecue chicken in a kitchen. French: Un jeune garçon, portant une toque et un tablier, coupant du poulet dans une cuisine. German: Ein kleiner Junge in einer weißen Mütze und mit Schürze schneidet in einer Küche Würstchen glas .\nEnglish: A young boy, wearing a blue hat and apron, is cooking meat in a kitchen. French: Un petit garçon, portant une toque et un tablier, fait la cuisine dans une cuisine. German: Ein kleiner Junge mit blauer Mütze und schneidet in einer Küche Würstchen.\n\nFigure 5: Partially conditioned generation samples drawn from our model. The seed text is shown in gray, with several different in-filling samples from the model in black. The samples show reasonable consistency and diversity." }, { "heading": "4.5 UNCONDITIONAL MULTILINGUAL GENERATION", "text": "We then evaluated the models on the unconditional multilingual generation task: generating one sentence in each of the 4 languages such that they correspond to each other. For the Joint model, we perform 3 types of sampling: (1) unrestricted, (2) chain, and (3) common cause. For unrestricted sampling, we sampled one (token, location) pair at each iteration starting from an empty canvas, allowing the model to insert a token in any language, until all slots were marked as completed. In chain generation, we first restrict the model to generating the English sentence one token at a time, then sample French, German, and Czech in order, each conditioned on the sentence in the previous language. For common cause, we reuse the same English and French sampled sentences, and generate the German and Czech conditioned on the English sentence (i.e., 3 languages are all conditioned on English).\n\nGiven these sets of sentences in 4 languages, for each language pair direction we computed a pseudo target by using a separately trained (on Multi30k) vanilla Transformer (Vaswani et al., 2017) with beam search (size 5) to translate the chosen source language sample. Figure 6 visualizes the pseudo-target BLEU score for different source-target language pairs when comparing the Joint model under the different types of sampling. The shaded colour represents the difference between the current sampling scheme and the unrestricted reference. We observe that letting the model sample in an unrestricted order was better than either the chain or the common cause sampling." }, { "heading": "5 RELATED WORK", "text": "While we have demonstrated a KERMIT implementation of MGLM, many other variants of Transformer models share similar properties. Xia et al. (2019) and He et al. (2018) both consider shared encoders/decoders, while KERMIT removes the distinction between the encoder and decoder altogether.
XLNet (Yang et al., 2019) also learns over all permutations of the factorization order, with an additional architectural modification, a two-stream attention parameterization, to resolve ambiguity in the targets. The idea of concatenating pairs of source and target sequences from different language channels has been explored by Lample & Conneau (2019). However, unlike models trained with the insertion objective, their model is trained through masked language modeling as in BERT (Devlin et al., 2019), and is therefore not readily usable for generation.\n\nEvaluation of text generation models remains a challenge (Liu et al., 2016; Novikova et al., 2017). Quality-versus-diversity plots have been used to compare the trade-off at different output softmax temperatures, for example in Stochastic Beam Search (Kool et al., 2019), which used a simpler n-gram diversity measure instead of Self-BLEU (Zhu et al., 2018). However, we are the first to characterize the quality-diversity behaviour of insertion-based models against existing left-to-right language models. Other metrics summarize the quality and diversity trade-off as a single number, such as the Frechet BERT Distance (Montahaei et al., 2019), inspired by the FID score (Heusel et al., 2017) used in computer vision, or take into account human evaluation (Hashimoto et al., 2019)." }, { "heading": "6 CONCLUSION", "text": "We have demonstrated that a multichannel model implemented with KERMIT can learn a joint distribution over more than two sequences. Furthermore, our multichannel KERMIT model allows for efficient inference of multiple target languages in parallel using a single model. Our work focused on a specific instantiation of channels in the case of languages; however, there are no model limitations that inhibit further generalization to other notions of channels. In future work we aim to consider the addition of multimodal channels, such as images, as well as other textual channels, such as paraphrases, premises and hypotheses, and questions and answers. Fully generative models still often lag behind purely discriminative counterparts in terms of performance, but we believe it is crucial to take steps towards other model formulations that have high potential. We also intend to explore the limits on the number of channels that can be considered, such as building generative models over dozens or even hundreds of languages. We hope this initial line of work motivates future research on building generative models of the world." }, { "heading": "A APPENDICES", "text": "A.1 ADDITIONAL QUALITY-DIVERSITY CURVES FOR CONDITIONAL GENERATION\n\nA.2 ADDITIONAL MULTI30K TRANSLATION RESULTS\n\nA.3 UNCONDITIONAL SAMPLING GENERATION\n\nFigure 10 illustrates the serial sampling (one token at a time) from the joint model, every 20 timesteps." } ]
2,019
MULTICHANNEL GENERATIVE LANGUAGE MODELS
SP:69704bad659d8cc6e35dc5b7f372bf2e39805f4f
[ "This paper studies the convergence of multiple methods (Gradient, extragradient, optimistic and momentum) on a bilinear minmax game. More precisely, this paper uses spectral condition to study the difference between simultaneous (Jacobi) and alternating (Gau\\ss-Seidel) updates. The analysis is based on Schur theorem and give necessary and sufficient condition for convergence. ", "The paper presents exact conditions for the convergence of several gradient based methods for solving bilinear games. In particular, the methods under study are Gradient Descent(GD), Extragradient (EG), Optimizatic Gradient descent (OGD) and Momentum methods. For these methods, the authors provide convergence rates (with optimal parameter setup) for both alternating (Gauss-Seidel) and simultaneous (Jacobi) updates. " ]
Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, while understanding the dynamics of gradient algorithms for solving such formulations has remained a grand challenge. As a first step, we restrict to bilinear zero-sum games and give a systematic analysis of popular gradient updates, for both simultaneous and alternating versions. We provide exact conditions for their convergence and find the optimal parameter setup and convergence rates. In particular, our results offer formal evidence that alternating updates converge “better” than simultaneous ones.
[ { "affiliations": [], "name": "ZERO-SUM GAMES" }, { "affiliations": [], "name": "Guojun Zhang" }, { "affiliations": [], "name": "Yaoliang Yu" } ]
[ { "authors": [ "M. Arjovsky", "S. Chintala", "L. Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "K.J. Arrow", "L. Hurwicz", "H. Uzawa" ], "title": "Studies in linear and non-linear programming", "venue": null, "year": 1958 }, { "authors": [ "J.P. Bailey", "G. Piliouras" ], "title": "Multiplicative weights update in zero-sum games", "venue": "In Proceedings of the 2018 ACM Conference on Economics and Computation,", "year": 2018 }, { "authors": [ "J.P. Bailey", "G. Gidel", "G. Piliouras" ], "title": "Finite regret and cycles with fixed step-size via alternating gradient descent-ascent", "venue": "arXiv preprint arXiv:1907.04392,", "year": 2019 }, { "authors": [ "R.E. Bruck" ], "title": "On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space", "venue": "Journal of Mathematical Analysis and Applications,", "year": 1977 }, { "authors": [ "Y. Carmon", "Y. Jin", "A. Sidford", "K. Tian" ], "title": "Variance reduction for matrix games", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "X. Chen", "X. Deng", "S.-H. Teng" ], "title": "Settling the complexity of computing two-player Nash equilibria", "venue": "Journal of the ACM,", "year": 2009 }, { "authors": [ "S.S. Cheng", "S.S. Chiou" ], "title": "Exact stability regions for quartic polynomials", "venue": "Bulletin of the Brazilian Mathematical Society,", "year": 2007 }, { "authors": [ "B. Dai", "A. Shaw", "L. Li", "L. Xiao", "N. He", "Z. Liu", "J. Chen", "L. Song" ], "title": "Sbeed: Convergent reinforcement learning with nonlinear function approximation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "C. Daskalakis", "I. Panageas" ], "title": "Last-iterate convergence: Zero-sum games and constrained min-max optimization", "venue": "In Innovations in Theoretical Computer Science,", "year": 2019 }, { "authors": [ "C. Daskalakis", "A. Ilyas", "V. Syrgkanis", "H. Zeng" ], "title": "Training GANs with optimism", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "A. Deligkas", "J. Fearnley", "R. Savani", "P. Spirakis" ], "title": "Computing approximate Nash equilibria in polymatrix", "venue": "games. Algorithmica,", "year": 2017 }, { "authors": [ "V.F. Dem’yanov", "A.B. Pevnyi" ], "title": "Numerical methods for finding saddle points", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1972 }, { "authors": [ "S.S. Du", "J. Chen", "L. Li", "L. Xiao", "D. Zhou" ], "title": "Stochastic variance reduction methods for policy evaluation", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Y. Freund", "R.E. Schapire" ], "title": "Adaptive game playing using multiplicative weights", "venue": "Games and Economic Behavior,", "year": 1999 }, { "authors": [ "G. Gidel", "H. Berard", "G. Vignoud", "P. Vincent", "S. Lacoste-Julien" ], "title": "A variational inequality perspective on generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "G. Gidel", "R.A. Hemmat", "M. Pezeshki", "G. Huang", "R. Lepriol", "S. Lacoste-Julien", "I. Mitliagkas" ], "title": "Negative momentum for improved game dynamics", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "I. Gohberg", "P. Lancaster", "L. 
Rodman" ], "title": "Matrix polynomials", "venue": null, "year": 1982 }, { "authors": [ "G. E" ], "title": "Gol’shtein. A generalized gradient method for finding saddlepoints", "venue": "Ekonomika i matematicheskie metody,", "year": 1972 }, { "authors": [ "I. Goodfellow", "J. Pouget-Abadie", "M. Mirza", "B. Xu", "D. Warde-Farley", "S. Ozair", "A. Courville", "Y. Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Y.-G. Hsieh", "F. Iutzeler", "J. Malick", "P. Mertikopoulos" ], "title": "On the convergence of single-call stochastic extra-gradient methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "M. G" ], "title": "Korpelevich. The extragradient method for finding saddle points and other problems", "venue": "Matecon, 12:747–756,", "year": 1976 }, { "authors": [ "T. Liang", "J. Stokes" ], "title": "Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "P.L. Lions" ], "title": "Une méthode itérative de résolution d’une inéquation variationnelle", "venue": "Israel Journal of Mathematics,", "year": 1978 }, { "authors": [ "A. Madry", "A. Makelov", "L. Schmidt", "D. Tsipras", "A. Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "M. Mansour" ], "title": "Discrete-time and sampled-data stability tests", "venue": "CRC press,", "year": 2011 }, { "authors": [ "B. Martinet" ], "title": "Régularisation d’inéquations variationnelles par approximations successives. ESAIM: Mathematical Modelling and Numerical Analysis: Modélisation Mathématique et", "venue": "Analyse Numérique,", "year": 1970 }, { "authors": [ "P. Mertikopoulos", "C. Papadimitriou", "G. Piliouras" ], "title": "Cycles in adversarial regularized learning", "venue": "In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms,", "year": 2018 }, { "authors": [ "P. Mertikopoulos", "B. Lecouat", "H. Zenati", "C.-S. Foo", "V. Chandrasekhar", "G. Piliouras" ], "title": "Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "L. Mescheder", "S. Nowozin", "A. Geiger" ], "title": "The numerics of GANs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "L. Mescheder", "A. Geiger", "S. Nowozin" ], "title": "Which training methods for GANs do actually converge", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "A. Mokhtari", "A. Ozdaglar", "S. Pattathil" ], "title": "Proximal point approximations achieving a convergence rate of O(1/k) for smooth convex-concave saddle point problems: Optimistic gradient and extragradient methods", "venue": "arXiv preprint arXiv:1906.01115,", "year": 2019 }, { "authors": [ "A. Mokhtari", "A. Ozdaglar", "S. 
Pattathil" ], "title": "A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach", "venue": "arXiv preprint arXiv:1901.08511,", "year": 2019 }, { "authors": [ "R.D.C. Monteiro", "B.F. Svaiter" ], "title": "On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean", "venue": "SIAM Journal on Optimization,", "year": 2010 }, { "authors": [ "V. Nagarajan", "J.Z. Kolter" ], "title": "Gradient descent GAN optimization is locally stable", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "A. Nedić", "A. Ozdaglar" ], "title": "Subgradient methods for saddle-point problems", "venue": "Journal of optimization theory and applications,", "year": 2009 }, { "authors": [ "A. Nemirovski" ], "title": "Prox-method with rate of convergence O(1/t) for variational inequalities with lipschitz continuous monotone operators and smooth convex-concave saddle point problems", "venue": "SIAM Journal on Optimization,", "year": 2004 }, { "authors": [ "A.S. Nemirovski", "D.B. Yudin" ], "title": "Cesàro convergence of the gradient method of approximating saddle points of convex-concave functions", "venue": "Doklady Akademii Nauk,", "year": 1978 }, { "authors": [ "A.S. Nemirovski", "D.B. Yudin" ], "title": "Problem complexity and method efficiency in optimization", "venue": null, "year": 1983 }, { "authors": [ "Y. Nesterov" ], "title": "A method for unconstrained convex minimization problem with the rate of convergence O(1/k2)", "venue": "Doklady Akademii Nauk,", "year": 1983 }, { "authors": [ "W. Peng", "Y. Dai", "H. Zhang", "L. Cheng" ], "title": "Training GANs with centripetal acceleration", "venue": "arXiv preprint arXiv:1902.08949,", "year": 2019 }, { "authors": [ "B.T. Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "L.D. Popov" ], "title": "A modification of the Arrow–Hurwicz method for search of saddle points", "venue": "Mathematical Notes,", "year": 1980 }, { "authors": [ "R.T. Rockafellar" ], "title": "Monotone operators and the proximal point algorithm", "venue": "SIAM journal on control and optimization,", "year": 1976 }, { "authors": [ "Y. Saad" ], "title": "Iterative methods for sparse linear systems", "venue": "SIAM, 2nd edition,", "year": 2003 }, { "authors": [ "I. Schur" ], "title": "Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind", "venue": "Journal für die reine und angewandte Mathematik,", "year": 1917 }, { "authors": [ "P. Stein", "R.L. Rosenberg" ], "title": "On the solution of linear simultaneous equations by iteration", "venue": "Journal of the London Mathematical Society,", "year": 1948 }, { "authors": [ "P. Tseng" ], "title": "On linear convergence of iterative methods for the variational inequality problem", "venue": "Journal of Computational and Applied Mathematics,", "year": 1995 }, { "authors": [], "title": "2018), we consider the following WGAN (Arjovsky et al., 2017): f(φ,θ) = min", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Min-max optimization has received significant attention recently due to the popularity of generative adversarial networks (GANs) (Goodfellow et al., 2014), adversarial training (Madry et al., 2018) and reinforcement learning (Du et al., 2017; Dai et al., 2018), just to name some examples. Formally, given a bivariate function f(x,y), we aim to find a saddle point (x∗,y∗) such that f(x∗,y) ≤ f(x∗,y∗) ≤ f(x,y∗), ∀x ∈ Rn, ∀y ∈ Rn. (1.1) Since the beginning of game theory, various algorithms have been proposed for finding saddle points (Arrow et al., 1958; Dem’yanov & Pevnyi, 1972; Gol’shtein, 1972; Korpelevich, 1976; Rockafellar, 1976; Bruck, 1977; Lions, 1978; Nemirovski & Yudin, 1983; Freund & Schapire, 1999). Due to its recent resurgence in ML, new algorithms specifically designed for training GANs were proposed (Daskalakis et al., 2018; Kingma & Ba, 2015; Gidel et al., 2019b; Mescheder et al., 2017). However, due to the inherent non-convexity in deep learning formulations, our current understanding of the convergence behaviour of new and classic gradient algorithms is still quite limited, and existing analysis mostly focused on bilinear games or strongly-convex-strongly-concave games (Tseng, 1995; Daskalakis et al., 2018; Gidel et al., 2019b; Liang & Stokes, 2019; Mokhtari et al., 2019b). Nonzero-sum bilinear games, on the other hand, are known to be PPAD-complete (Chen et al., 2009) (for finding approximate Nash equilibria, see e.g. Deligkas et al. (2017)).\nIn this work, we study bilinear zero-sum games as a first step towards understanding general min-max optimization, although our results apply to some simple GAN settings (Gidel et al., 2019a). It is well-known that certain gradient algorithms converge linearly on bilinear zero-sum games (Liang & Stokes, 2019; Mokhtari et al., 2019b; Rockafellar, 1976; Korpelevich, 1976). These iterative algorithms usually come with two versions: Jacobi style updates or Gauss–Seidel (GS) style. In a Jacobi style, we update the two sets of parameters (i.e., x and y) simultaneously whereas in a GS style we update them alternatingly (i.e., one after the other). Thus, Jacobi style updates are naturally amenable to parallelization while GS style updates have to be sequential, although the latter is usually found to converge faster (and more stable). In numerical linear algebra, the celebrated Stein–Rosenberg theorem (Stein & Rosenberg, 1948) formally proves that in solving certain linear systems, GS updates converge strictly faster than their Jacobi counterparts, and often with a larger set of convergent instances. However, this result does not readily apply to bilinear zero-sum games.\nOur main goal here is to answer the following questions about solving bilinear zero-sum games:\n• When exactly does a gradient-type algorithm converge?\nContributions We summarize our main results from §3 and §4 in Table 1 and 2 respectively, with supporting experiments given in §5. We use σ1 and σn to denote the largest and the smallest singular values of matrixE (see equation 2.1), and κ := σ1/σn denotes the condition number. The algorithms will be introduced in §2. Note that we generalize gradient-type algorithms but retain the same names. Table 1 shows that in most cases that we study, whenever Jacobi updates converge, the corresponding GS updates converge as well (usually with a faster rate), but the converse is not true (§3). This extends the well-known Stein–Rosenberg theorem to bilinear games. 
Furthermore, Table 2 tells us that by generalizing existing gradient algorithms, we can obtain faster convergence rates." }, { "heading": "2 PRELIMINARIES", "text": "In the study of GAN training, bilinear games are often regarded as an important simple example for theoretically analyzing and understanding new algorithms and techniques (e.g. Daskalakis et al., 2018; Gidel et al., 2019a;b; Liang & Stokes, 2019). It captures the difficulty in GAN training and can represent some simple GAN formulations (Arjovsky et al., 2017; Daskalakis et al., 2018; Gidel et al., 2019a; Mescheder et al., 2018). Mathematically, bilinear zero-sum games can be formulated as the following min-max problem:\n\nmin_{x∈R^n} max_{y∈R^n} x^T E y + b^T x + c^T y. (2.1)\n\nThe set of all saddle points (see the definition in eq. (1.1)) is:\n\n{(x, y) | Ey + b = 0, E^T x + c = 0}. (2.2)\n\nThroughout, for simplicity we assume E to be invertible, whereas the seemingly more general case with non-invertible E is treated in Appendix G. The linear terms are not essential in our analysis and we take b = c = 0 throughout the paper¹. In this case, the only saddle point is (0, 0). For bilinear games, it is well known that simultaneous gradient descent-ascent does not converge (Nemirovski & Yudin, 1983), and other gradient-based algorithms tailored for min-max optimization have been proposed (Korpelevich, 1976; Daskalakis et al., 2018; Gidel et al., 2019a; Mescheder et al., 2017). These iterative algorithms all belong to the class of general linear dynamical systems (LDS, a.k.a. matrix iterative processes).\n\n¹If they are not zero, one can translate x and y to cancel the linear terms; see e.g. Gidel et al. (2019b).\n\nUsing state augmentation z^(t) := (x^(t), y^(t)), we define a general k-step LDS as follows:\n\nz^(t) = ∑_{i=1}^{k} A_i z^(t−i) + d, (2.3)\n\nwhere the matrices A_i and the vector d depend on the gradient algorithm (examples can be found in Appendix C.1). Define the characteristic polynomial, with A_0 = −I:\n\np(λ) := det(∑_{i=0}^{k} A_i λ^{k−i}). (2.4)\n\nThe following well-known result decides when such a k-step LDS converges for any initialization:\n\nTheorem 2.1 (e.g. Gohberg et al. (1982)). The LDS in eq. (2.3) converges for any initialization (z^(0), . . . , z^(k−1)) iff the spectral radius r := max{|λ| : p(λ) = 0} < 1, in which case {z^(t)} converges linearly with (asymptotic) exponent r.\n\nTherefore, understanding the bilinear game dynamics reduces to spectral analysis. The (sufficient and necessary) convergence condition reduces to requiring that all roots of p(λ) lie in the (open) unit disk, which can be conveniently analyzed through the celebrated Schur theorem (Schur, 1917):\n\nTheorem 2.2 (Schur (1917)). The roots of a real polynomial p(λ) = a_0 λ^n + a_1 λ^{n−1} + · · · + a_n are within the (open) unit disk of the complex plane iff ∀k ∈ {1, 2, . . . , n}, det(P_k P_k^T − Q_k^T Q_k) > 0, where P_k, Q_k are k × k matrices defined as: [P_k]_{i,j} = a_{i−j} 1_{i≥j}, [Q_k]_{i,j} = a_{n−i+j} 1_{i≤j}.\n\nIn the theorem above, we denote by 1_S the indicator function of the event S, i.e. 1_S = 1 if S holds and 1_S = 0 otherwise. For a nice summary of related stability tests, see Mansour (2011). We therefore define Schur stable polynomials to be those polynomials whose roots all lie within the (open) unit disk of the complex plane. Schur’s theorem has the following corollary (proof included in Appendix B.2 for the sake of completeness):\n\nCorollary 2.1 (e.g. Mansour (2011)).
A real quadratic polynomial λ^2 + aλ + b is Schur stable iff b < 1 and |a| < 1 + b; a real cubic polynomial λ^3 + aλ^2 + bλ + c is Schur stable iff |c| < 1, |a + c| < 1 + b, and b − ac < 1 − c^2; a real quartic polynomial λ^4 + aλ^3 + bλ^2 + cλ + d is Schur stable iff |c − ad| < 1 − d^2, |a + c| < b + d + 1, and b < (1 + d) + (c − ad)(a − c)/(d − 1)^2.\n\nLet us formally define Jacobi and GS updates. Jacobi updates take the form\n\nx^(t) = T_1(x^(t−1), y^(t−1), . . . , x^(t−k), y^(t−k)), y^(t) = T_2(x^(t−1), y^(t−1), . . . , x^(t−k), y^(t−k)),\n\nwhile Gauss–Seidel updates replace x^(t−i) with the more recent x^(t−i+1) in the operator T_2, where T_1, T_2 : R^{nk} × R^{nk} → R^n can be any update functions. For LDS updates in eq. (2.3) we find a nice relation between the characteristic polynomials of Jacobi and GS updates in Theorem 2.3 (proof in Appendix B.1), which turns out to greatly simplify our subsequent analyses:\n\nTheorem 2.3 (Jacobi vs. Gauss–Seidel). Let p(λ, γ) = det(∑_{i=0}^{k} (γL_i + U_i) λ^{k−i}), where A_i = L_i + U_i and L_i is strictly lower block triangular. Then, the characteristic polynomial of Jacobi updates is p(λ, 1), while that of Gauss–Seidel updates is p(λ, λ).\n\nCompared to the Jacobi update, in some sense the Gauss–Seidel update amounts to shifting the strictly lower block triangular matrices L_i one step to the left, as p(λ, λ) can be rewritten as det(∑_{i=0}^{k} (L_{i+1} + U_i) λ^{k−i}), with L_{k+1} := 0. This observation will significantly simplify our comparison between Jacobi and Gauss–Seidel updates.\n\nNext, we define some popular gradient algorithms for finding saddle points in the min-max problem\n\nmin_x max_y f(x, y). (2.5)\n\nWe present the algorithms for a general (bivariate) function f, although our main results will specialize f to the bilinear case in eq. (2.1). Note that we introduce more “step sizes” for our refined analysis, as we find that the enlarged parameter space often contains choices for faster linear convergence (see §4). We only define the Jacobi updates, while the GS counterparts can be easily inferred. We always use α1 and α2 to denote step sizes (or learning rates), which are positive.\n\nGradient descent (GD) The generalized GD update has the following form:\n\nx^(t+1) = x^(t) − α1 ∇_x f(x^(t), y^(t)), y^(t+1) = y^(t) + α2 ∇_y f(x^(t), y^(t)). (2.6)\n\nWhen α1 = α2, the convergence of averaged iterates (a.k.a. Cesàro convergence) for convex-concave games is analyzed in (Bruck, 1977; Nemirovski & Yudin, 1978; Nedić & Ozdaglar, 2009). Recent progress on interpreting GD with dynamical systems can be seen in, e.g., Mertikopoulos et al. (2018); Bailey et al. (2019); Bailey & Piliouras (2018).\n\nExtra-gradient (EG) We study a generalized version of EG, defined as follows:\n\nx^(t+1/2) = x^(t) − γ1 ∇_x f(x^(t), y^(t)), y^(t+1/2) = y^(t) + γ2 ∇_y f(x^(t), y^(t)); (2.7)\nx^(t+1) = x^(t) − α1 ∇_x f(x^(t+1/2), y^(t+1/2)), y^(t+1) = y^(t) + α2 ∇_y f(x^(t+1/2), y^(t+1/2)). (2.8)\n\nEG was first proposed in Korpelevich (1976) with the restriction α1 = α2 = γ1 = γ2, under which linear convergence was proved for bilinear games. Convergence of EG on convex-concave games was analyzed in Nemirovski (2004); Monteiro & Svaiter (2010), and Mertikopoulos et al. (2019) provides convergence guarantees for specific non-convex-non-concave problems. For bilinear games, a slightly more generalized version was proposed in Liang & Stokes (2019) where α1 = α2 and γ1 = γ2, with linear convergence proved. For later convenience we define β1 = α2 γ1 and β2 = α1 γ2.
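As a minimal illustration of the generalized EG update above, specialized to the bilinear game f(x, y) = x^T E y (so ∇_x f = Ey and ∇_y f = E^T x), the following sketch runs the Jacobi-style iteration; the step sizes and matrix are arbitrary placeholders, and convergence depends on the conditions derived in §3.

```python
import numpy as np

def extragradient_bilinear(E, steps=500, a1=0.1, a2=0.1, g1=0.5, g2=0.5):
    """Generalized EG on f(x, y) = x^T E y with Jacobi-style updates.
    Gradients are grad_x f = E y and grad_y f = E^T x; the saddle point is (0, 0)."""
    n = E.shape[0]
    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    for _ in range(steps):
        # Extrapolation step (eq. 2.7).
        x_half = x - g1 * (E @ y)
        y_half = y + g2 * (E.T @ x)
        # Update step (eq. 2.8), with gradients at the extrapolated point.
        x, y = x - a1 * (E @ y_half), y + a2 * (E.T @ x_half)
    return np.linalg.norm(x) + np.linalg.norm(y)  # distance to the saddle point
```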
Optimistic gradient descent (OGD) We study a generalized version of OGD, defined as follows:\n\nx^(t+1) = x^(t) − α1 ∇_x f(x^(t), y^(t)) + β1 ∇_x f(x^(t−1), y^(t−1)), (2.9)\ny^(t+1) = y^(t) + α2 ∇_y f(x^(t), y^(t)) − β2 ∇_y f(x^(t−1), y^(t−1)). (2.10)\n\nThe original version of OGD was given in Popov (1980) with α1 = α2 = 2β1 = 2β2, and was rediscovered in the GAN literature (Daskalakis et al., 2018). Its linear convergence for bilinear games was proved in Liang & Stokes (2019). A slightly more generalized version with α1 = α2 and β1 = β2 was analyzed in Peng et al. (2019); Mokhtari et al. (2019b), again with linear convergence proved. The stochastic case was analyzed in Hsieh et al. (2019).\n\nMomentum method The generalized heavy ball method was analyzed in Gidel et al. (2019b):\n\nx^(t+1) = x^(t) − α1 ∇_x f(x^(t), y^(t)) + β1 (x^(t) − x^(t−1)), (2.11)\ny^(t+1) = y^(t) + α2 ∇_y f(x^(t), y^(t)) + β2 (y^(t) − y^(t−1)). (2.12)\n\nThis is a modification of Polyak’s heavy ball (HB) method (Polyak, 1964), which also motivated Nesterov’s accelerated gradient algorithm (NAG) (Nesterov, 1983). Note that for both the x-update and the y-update, we add a scalar multiple of the successive difference (a proxy of the momentum). For this algorithm our result below improves those obtained in Gidel et al. (2019b), as will be discussed in §3.\n\nEG and OGD as approximations of the proximal point algorithm It has been observed recently in Mokhtari et al. (2019b) that for convex-concave games, EG (α1 = α2 = γ1 = γ2 = η) and OGD (α1/2 = α2/2 = β1 = β2 = η) can be treated as approximations of the proximal point algorithm (Martinet, 1970; Rockafellar, 1976) when η is small. With this result, one can show that EG and OGD converge to saddle points sublinearly for smooth convex-concave games (Mokhtari et al., 2019a). We give a brief introduction to the proximal point algorithm in Appendix A (including a linear convergence result for the slightly generalized version).\n\nThe above algorithms, when specialized to a bilinear function f (see eq. (2.1)), can be rewritten as a 1-step or 2-step LDS (see eq. (2.3)). See Appendix C.1 for details." }, { "heading": "3 EXACT CONDITIONS", "text": "With the tools from §2, we formulate necessary and sufficient conditions under which a gradient-based algorithm converges for bilinear games. We sometimes use “J” as a shorthand for Jacobi style updates and “GS” for Gauss–Seidel style updates. For each algorithm, we first write down the characteristic polynomials (see the derivations in Appendix C.1) for both Jacobi and GS updates, and present the exact conditions for convergence. Specifically, we show that in many cases the GS convergence regions strictly include the Jacobi convergence regions. The proofs of Theorems 3.1, 3.2, 3.3 and 3.4 can be found in Appendices C.2, C.3, C.4, and C.5, respectively.\n\nGD The characteristic equations can be computed as:\n\nJ: (λ − 1)^2 + α1 α2 σ^2 = 0, GS: (λ − 1)^2 + α1 α2 σ^2 λ = 0. (3.1)\n\nScaling symmetry From eq. (3.1) we obtain a scaling symmetry (α1, α2) → (tα1, α2/t), with t > 0. With this symmetry we can always fix α1 = α2 = α. This symmetry also holds for EG and momentum. For OGD, the scaling symmetry is slightly different, namely (α1, β1, α2, β2) → (tα1, tβ1, α2/t, β2/t), but we can still use this symmetry to fix α1 = α2 = α.\n\nTheorem 3.1 (GD). Jacobi GD and Gauss–Seidel GD do not converge. However, Gauss–Seidel GD can have a limit cycle, while Jacobi GD always diverges.
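The contrast in Theorem 3.1 is easy to see numerically. Here is a minimal sketch on the scalar bilinear game f(x, y) = σxy: the Jacobi iterate norm grows without bound, while the Gauss–Seidel iterate stays bounded (a limit cycle) whenever α²σ² ≤ 4. The step size and σ are arbitrary illustrative choices.

```python
def gd_bilinear(sigma=1.0, alpha=0.1, steps=1000, gauss_seidel=False):
    """Simultaneous (Jacobi) vs. alternating (Gauss-Seidel) GD on f(x, y) = sigma*x*y,
    starting from (1, 1). Per Theorem 3.1: Jacobi always diverges; GS can stay
    on a bounded limit cycle."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        x_next = x - alpha * sigma * y                                # descent on x
        y_next = y + alpha * sigma * (x_next if gauss_seidel else x)  # ascent on y
        x, y = x_next, y_next
    return (x * x + y * y) ** 0.5

print(gd_bilinear(gauss_seidel=False))  # grows without bound (diverges)
print(gd_bilinear(gauss_seidel=True))   # stays bounded (limit cycle)
```

The Jacobi update matrix has eigenvalues 1 ± iασ of modulus √(1 + α²σ²) > 1, whereas the GS update matrix has determinant 1, which is why the two behaviors differ.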
(2018) and Bailey & Piliouras (2018) show that FTRL, a generalization of GD, does not converge for polymatrix games. When α_1 = α_2, the result for Gauss–Seidel GD was shown in Bailey et al. (2019).\nEG The characteristic equations can be computed as:\nJ: (λ − 1)^2 + (β_1 + β_2)σ^2(λ − 1) + (α_1α_2σ^2 + β_1β_2σ^4) = 0, (3.2)\nGS: (λ − 1)^2 + (α_1α_2 + β_1 + β_2)σ^2(λ − 1) + (α_1α_2σ^2 + β_1β_2σ^4) = 0. (3.3)\nTheorem 3.2 (EG). For generalized EG with α_1 = α_2 = α and γ_i = β_i/α, Jacobi and Gauss–Seidel updates achieve linear convergence iff for any singular value σ of E we have:\nJ: |β_1σ^2 + β_2σ^2 − 2| < 1 + (1 − β_1σ^2)(1 − β_2σ^2) + α^2σ^2 and (1 − β_1σ^2)(1 − β_2σ^2) + α^2σ^2 < 1; (3.4)\nGS: |(β_1 + β_2 + α^2)σ^2 − 2| < 1 + (1 − β_1σ^2)(1 − β_2σ^2) and (1 − β_1σ^2)(1 − β_2σ^2) < 1. (3.5)\nIf β_1 + β_2 + α^2 < 2/σ_1^2, the convergence region of GS updates strictly includes that of Jacobi updates.\nOGD The characteristic equations can be computed as:\nJ: λ^2(λ − 1)^2 + (λα_1 − β_1)(λα_2 − β_2)σ^2 = 0, (3.6)\nGS: λ^2(λ − 1)^2 + (λα_1 − β_1)(λα_2 − β_2)λσ^2 = 0. (3.7)\nTheorem 3.3 (OGD). For generalized OGD with α_1 = α_2 = α, Jacobi and Gauss–Seidel updates achieve linear convergence iff for any singular value σ of E we have:\nJ: |β_1β_2σ^2| < 1, (α − β_1)(α − β_2) > 0, 4 + (α + β_1)(α + β_2)σ^2 > 0, and α^2(β_1^2σ^2 + 1)(β_2^2σ^2 + 1) < (β_1β_2σ^2 + 1)(2α(β_1 + β_2) + β_1β_2(β_1β_2σ^2 − 3)); (3.8)\nGS: (α − β_1)(α − β_2) > 0, (α + β_1)(α + β_2)σ^2 < 4, and (αβ_1σ^2 + 1)(αβ_2σ^2 + 1) > (1 + β_1β_2σ^2)^2. (3.9)\nThe convergence region of GS updates strictly includes that of Jacobi updates.\nMomentum The characteristic equations can be computed as:\nJ: (λ − 1)^2(λ − β_1)(λ − β_2) + α_1α_2σ^2λ^2 = 0, (3.10)\nGS: (λ − 1)^2(λ − β_1)(λ − β_2) + α_1α_2σ^2λ^3 = 0. (3.11)\nTheorem 3.4 (momentum). For the generalized momentum method with α_1 = α_2 = α, the Jacobi updates never converge, while the GS updates converge iff for any singular value σ of E we have:\n|β_1β_2| < 1, |−α^2σ^2 + β_1 + β_2 + 2| < β_1β_2 + 3, 4(β_1 + 1)(β_2 + 1) > α^2σ^2, and α^2σ^2β_1β_2 < (1 − β_1β_2)(2β_1β_2 − β_1 − β_2). (3.12)\nThis condition implies that at least one of β_1, β_2 is negative.\nPrior to our work, only sufficient conditions for linear convergence were given for the usual EG and OGD; see §2 above. For the momentum method, our result improves upon Gidel et al. (2019b), where only specific parameter choices were considered: β_1 = β_2 ≥ −1/16 for Jacobi momentum (but with an explicit rate of divergence), and β_1 = −1/2, β_2 = 0 for GS momentum (with a convergence rate). Our Theorem 3.4 gives a more complete picture and formally justifies the necessity of negative momentum.\nIn the theorems above, we used the term “convergence region” to denote the subset of the parameter space (with parameters α, β or γ) where the algorithm converges. Our result shares similarity with the celebrated Stein–Rosenberg theorem (Stein & Rosenberg, 1948), which only applies to solving linear systems with non-negative matrices (if one were to apply it to our case, the matrix S in eq. (F.1) in Appendix F would need to have non-zero diagonal entries, which is not possible). In this sense, our results extend the Stein–Rosenberg theorem to cover nontrivial bilinear games." }, { "heading": "4 OPTIMAL EXPONENTS OF LINEAR CONVERGENCE", "text": "In this section we study the optimal convergence rates of EG and OGD. We define the exponent of linear convergence as r = lim_{t→∞} ||z^(t)||/||z^(t−1)||, which is the same as the spectral radius.
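For a given algorithm, parameter setting, and singular value, this exponent can be read off numerically from the corresponding characteristic polynomial. Below is a minimal numpy sketch (ours, for illustration only; σ_1 = 1, σ_n = 0.5, and the parameter values are arbitrary choices) for Gauss–Seidel EG with α_1 = α_2 = α and β_1 = β_2 = β, whose quadratic characteristic polynomial is given by eq. (3.3):

import numpy as np

def gs_eg_radius(alpha, beta, sigmas):
    # Largest root modulus of eq. (3.3) over the given singular values,
    # with alpha_1 = alpha_2 = alpha and beta_1 = beta_2 = beta.
    r = 0.0
    for s in sigmas:
        a = (2 * beta + alpha**2) * s**2 - 2   # coefficient of lambda
        b = (1 - beta * s**2) ** 2             # constant coefficient
        r = max(r, np.abs(np.roots([1.0, a, b])).max())
    return r

sigmas = np.array([1.0, 0.5])                  # sigma_1 and sigma_n
print(gs_eg_radius(1e-3, 2 / (sigmas**2).sum(), sigmas))

With β = 2/(σ_1^2 + σ_n^2) and α → 0 this prints ≈ 0.6 = (κ^2 − 1)/(κ^2 + 1) for κ = 2, matching Theorem 4.1 below.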
For ease of presentation we fix α_1 = α_2 = α > 0 (using the scaling symmetry) and we use r_* to denote the optimal exponent of linear convergence (achieved by tuning the parameters α, β, γ). Our results show that by generalizing gradient algorithms one can obtain better convergence rates.\nTheorem 4.1 (EG optimal). Both Jacobi and GS EG achieve the optimal exponent of linear convergence r_* = (κ^2 − 1)/(κ^2 + 1) at α → 0 and β_1 = β_2 = 2/(σ_1^2 + σ_n^2). As κ → ∞, r_* → 1 − 2/κ^2.\nNote that we defined β_i = γ_iα in Section 2. In other words, we are taking very large extra-gradient steps (γ_i → ∞) and very small gradient steps (α → 0).\nTheorem 4.2 (OGD optimal). For Jacobi OGD with β_1 = β_2 = β, to achieve the optimal exponent of linear convergence, we must have α ≤ 2β. For the original OGD with α = 2β, the optimal exponent of linear convergence r_* satisfies\nr_*^2 = 1/2 + (1/(4√2 σ_1^2)) √((σ_1^2 − σ_n^2)(5σ_1^2 − σ_n^2 + √((σ_1^2 − σ_n^2)(9σ_1^2 − σ_n^2)))), at (4.1)\nβ_* = (1/(4√2)) √((3σ_1^4 − (σ_1^2 − σ_n^2)^(3/2) √(9σ_1^2 − σ_n^2) + 6σ_1^2σ_n^2 − σ_n^4)/(σ_1^4σ_n^2)). (4.2)\nIf κ → ∞, r_* ∼ 1 − 1/(6κ^2). For GS OGD with β_2 = 0, the optimal exponent of convergence is r_* = √((κ^2 − 1)/(κ^2 + 1)), at α = √2/σ_1 and β_1 = √2σ_1/(σ_1^2 + σ_n^2). If κ → ∞, r_* ∼ 1 − 1/κ^2.\nRemark The original OGD (Popov, 1980; Daskalakis et al., 2018) with α = 2β may not always be optimal. For example, take a one-dimensional bilinear game with σ = 1, and denote the spectral radius for given (α, β) as r(α, β). If we fix α = 1/2, by numerically solving eq. (3.6) we have\nr(1/2, 1/4) ≈ 0.966, r(1/2, 1/3) ≈ 0.956, (4.3)\ni.e., α = 1/2, β = 1/3 is a better choice than α = 2β = 1/2.\nNumerical method We provide a numerical method for finding the optimal exponent of linear convergence, by realizing that the unit disk in Theorem 2.2 is not special. We call a polynomial r-Schur stable if all of its roots lie within an (open) disk of radius r in the complex plane. We can rescale the polynomial with the following lemma:\nLemma 4.1. A polynomial p(λ) is r-Schur stable iff p(rλ) is Schur stable.\nWith the lemma above, one can rescale the Schur conditions and find the convergence region where the exponent of linear convergence is at most r (r < 1). A simple binary search then allows one to find better and better convergence regions. See details in Appendix D.3." }, { "heading": "5 EXPERIMENTS", "text": "Bilinear game We run experiments on a simple bilinear game and choose the optimal parameters as suggested in Theorems 4.1 and 4.2. The results are shown in the left panel of Figure 1, which confirms the predicted linear rates.\nDensity plots We show the density plots (heat maps) of the spectral radii in Figure 2. We make plots for EG, OGD and momentum with both Jacobi and GS updates. These plots are made with β_1 = β_2 = β, and they agree with our theorems in §3.\nWasserstein GAN As in Daskalakis et al. (2018), we consider a WGAN (Arjovsky et al., 2017) that learns the mean of a Gaussian:\nmin_φ max_θ f(φ, θ) := E_{x∼N(v, σ^2 I)}[s(θ^T x)] − E_{z∼N(0, σ^2 I)}[s(θ^T(z + φ))], (5.1)\nwhere s(x) is the sigmoid function. It can be shown that near the saddle point (θ_*, φ_*) = (0, v) the min-max optimization can be treated as a bilinear game (Appendix E.1). With GS updates, we find that Adam diverges, SGD goes around a limit cycle, and EG converges, as shown in the middle panel of Figure 1. We can see that Adam does not behave well even in this simple task of learning a single two-dimensional Gaussian with a GAN.\nOur next experiment shows that generalized algorithms may have an advantage over traditional ones.
Inspired by Theorem 4.1, we compare the convergence of two EGs with the same parameter β = αγ, and find that with scaling, EG has better convergence, as shown in the right panel of Figure 1. Finally,\nwe compare Jacobi updates with GS updates. In Figure 3, we can see that GS updates converge even if the corresponding Jacobi updates do not.\nMixtures of Gaussians (GMMs) Our last experiment is on learning GMMs with a vanilla GAN (Goodfellow et al., 2014) that does not directly fall into our analysis. We choose a 3-hidden layer ReLU network for both the generator and the discriminator, and each hidden layer has 256 units. We find that for GD and OGD, Jacobi style updates converge more slowly than GS updates, and whenever Jacobi updates converge, the corresponding GS updates converges as well. These comparisons can be found in Figure 4 and 5, which implies the possibility of extending our results to non-bilinear games. Interestingly, we observe that even Jacobi GD converges on this example. We provide additional comparison between the Jacobi and GS updates of Adam (Kingma & Ba, 2015) in Appendix E.2." }, { "heading": "6 CONCLUSIONS", "text": "In this work we focus on the convergence behaviour of gradient-based algorithms for solving bilinear games. By drawing a connection to discrete linear dynamical systems (§2) and using Schur’s theorem, we provide necessary and sufficient conditions for a variety of gradient algorithms, for both simultaneous (Jacobi) and alternating (Gauss–Seidel) updates. Our results show that Gauss–Seidel updates converge more easily than Jacobi updates. Furthermore, we find the optimal exponents of linear convergence for EG and OGD, and provide a numerical method for searching that exponent. We performed a number of experiments to validate our theoretical findings and suggest further analysis.\nThere are many future directions to explore. For example, our preliminary experiments on GANs suggest that similar (local) results might be obtained for more general games. Indeed, the local convergence behaviour of min-max nonlinear optimization can be studied through analyzing the spectrum of the Jacobian matrix of the update operator (see, e.g., Nagarajan & Kolter (2017); Gidel et al. (2019b)). We believe our framework that draws the connection to linear discrete dynamic systems and Schur’s theorem is a powerful machinery that can be applied in such problems and beyond. It would be interesting to generalize our results to the constrained case (even for bilinear games), as studied in Daskalakis & Panageas (2019); Carmon et al. (2019). Extending our results to account for stochastic noise (as empirically tested in our experiments) is another interesting direction, with results in Gidel et al. (2019a); Hsieh et al. (2019)." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We would like to thank Argyrios Deligkas, Sarath Pattathil and Georgios Piliouras for pointing out several related references. GZ is supported by David R. Cheriton Scholarship. We gratefully acknowledge funding support from NSERC and the Waterloo-Huawei Joint Innovation Lab." }, { "heading": "A PROXIMAL POINT (PP) ALGORITHM", "text": "PP was originally proposed by Martinet (1970) with α1 = α2 and then carefully studied by Rockafellar (1976). The linear convergence for bilinear games was also proved in the same reference. 
Note that we do not consider Gauss–Seidel PP since we do not get a meaningful solution after a shift of steps2.\nx(t+1) = x(t) − α1∇xf(x(t+1),y(t+1)), y(t+1) = y(t) + α2∇yf(x(t+1),y(t+1)), (A.1)\nwhere x(t+1) and y(t+1) are given implicitly by solving the equations above. For bilinear games, one can derive that:\nz(t+1) =\n[ I α1E\n−α2E> I\n]−1 z(t). (A.2)\nWe can compute the exact form of the inverse matrix, but perhaps an easier way is just to compute the spectrum of the original matrix (the same as Jacobi GD except that we flip the signs of αi) and perform λ→ 1/λ. Using the fact that the eigenvalues of a matrix are reciprocals of the eigenvalues of its inverse, the characteristic equation is:\n(1/λ− 1)2 + α1α2σ2 = 0. (A.3)\nWith the scaling symmetry (α1, α2) → (tα1, α2/t), we can take α1 = α2 = α > 0. With the notations in Corollary 2.1, we have a = −2/(1 + α2σ2) and b = 1/(1 + α2σ2), and it is easy to check |a| < 1 + b and b < 1 are always satisfied, which means linear convergence is always guaranteed. Hence, we have the following theorem: Theorem A.1. For bilinear games, the proximal point algorithm always converges linearly.\nAlthough the proximal point algorithm behaves well, it is rarely used in practice since it is an implicit method, i.e., one needs to solve (x(t+1),y(t+1)) from equation A.1." }, { "heading": "B PROOFS IN SECTION 2", "text": "" }, { "heading": "B.1 PROOF OF THEOREM 2.3", "text": "In this section we apply Theorem 2.1 to prove Theorem 2.3, an interesting connection between Jacobi and Gauss–Seidel updates:\nTheorem 2.3 (Jacobi vs. Gauss–Seidel). Let p(λ, γ) = det( ∑k i=0(γLi +Ui)λ\nk−i), whereAi = Li + Ui and Li is strictly lower block triangular. Then, the characteristic polynomial of Jacobi updates is p(λ, 1) while that of Gauss–Seidel updates is p(λ, λ).\nLet us first consider the block linear iterative process in the sense of Jacobi (i.e., all blocks are updated simultaneously):\nz(t) = z (t) 1 ... z (t) b = k∑ i=1 Ai z (t−i) 1 ... z (t−i) b = k∑ i=1 l−1∑ j=1 Ai,jz (t−i) j + b∑ j=l Ai,jz (t−i) j + d, (B.1) whereAi,j is the j-th column block ofAi. For each matrixAi, we decompose it into the sum\nAi = Li +Ui, (B.2)\nwhere Li is the strictly lower block triangular part and Ui is the upper (including diagonal) block triangular part. Theorem 2.1 indicates that the convergence behaviour of equation B.1 is governed by the largest modulus of the roots of the characteristic polynomial:\ndet ( −λkI +\nk∑ i=1 Aiλ k−i\n) = det ( −λkI +\nk∑ i=1 (Li +Ui)λ k−i\n) . (B.3)\n2If one uses inverse operators this is in principle doable.\nAlternatively, we can also consider the updates in the sense of Gauss–Seidel (i.e., blocks are updated sequentially):\nz (t) l = k∑ i=1 l−1∑ j=1 Ai,jz (t−i+1) j + b∑ j=l Ai,jz (t−i) j l + dl, l = 1, . . . , b. (B.4)\nWe can rewrite the Gauss–Seidel update elegantly3 as:\n(I −L1)z(t) = k∑ i=1 (Li+1 +Ui)z (t−i) + d, (B.5)\ni.e.,\nz(t) = k∑ i=1 (I −L1)−1(Li+1 +Ui)z(t−i) + (I −L1)−1d, (B.6)\nwhere Lk+1 := 0. Applying Theorem 2.1 again we know the convergence behaviour of the Gauss– Seidel update is governed by the largest modulus of roots of the characteristic polynomial:\ndet ( −λkI +\nk∑ i=1\n(I −L1)−1(Li+1 +Ui)λk−i )\n(B.7)\n= det ( (I −L1)−1 ( − λkI + λkL1 +\nk∑ i=1\n(Li+1 +Ui)λ k−i ))\n(B.8)\n= det(I −L1)−1 · det ( k∑ i=0 (λLi +Ui)λ k−i ) (B.9)\nNote thatA0 = −I and the factor det(I −L1)−1 can be discarded since multiplying a characteristic polynomial by a non-zero constant factor does not change its roots." 
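As a quick numerical sanity check of Theorem 2.3 (ours; not part of the proof), one can verify for a one-step LDS with a 2 × 2 block structure that the Jacobi spectrum consists of roots of p(λ, 1) while the Gauss–Seidel spectrum consists of roots of p(λ, λ). The block sizes and random blocks below are arbitrary choices:

import numpy as np

rng = np.random.default_rng(1)
n = 3
A1 = rng.standard_normal((2 * n, 2 * n))
L1 = np.zeros_like(A1)
L1[n:, :n] = A1[n:, :n]                        # strictly lower block part
U1 = A1 - L1                                   # upper (incl. diagonal) part

def p(lam, gamma):                             # p(lambda, gamma) with A0 = -I, k = 1
    return np.linalg.det(-lam * np.eye(2 * n) + gamma * L1 + U1)

for lam in np.linalg.eigvals(A1):              # Jacobi spectrum
    assert abs(p(lam, 1.0)) < 1e-8
gs_matrix = np.linalg.solve(np.eye(2 * n) - L1, U1)
for lam in np.linalg.eigvals(gs_matrix):       # Gauss-Seidel spectrum
    assert abs(p(lam, lam)) < 1e-8
print("Theorem 2.3 verified numerically for k = 1")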
}, { "heading": "B.2 PROOF OF COROLLARY 2.1", "text": "Corollary 2.1 (e.g. Mansour (2011)). A real quadratic polynomial λ2 + aλ + b is Schur stable iff b < 1, |a| < 1 + b; A real cubic polynomial λ3 + aλ2 + bλ + c is Schur stable iff |c| < 1, |a+ c| < 1 + b, b− ac < 1− c2; A real quartic polynomial λ4 + aλ3 + bλ2 + cλ+ d is Schur stable iff |c− ad| < 1− d2, |a+ c| < b+ d+ 1, and b < (1 + d) + (c− ad)(a− c)/(d− 1)2.\nProof. It suffices to prove the result for quartic polynomials. We write down the matrices:\nP1 = [1], Q1 = [d], (B.10)\nP2 = [ 1 0 a 1 ] , Q2 = [ d c 0 d ] , (B.11)\nP3 = [ 1 0 0 a 1 0 b a 1 ] ,Q3 = [ d c b 0 d c 0 0 d ] , (B.12)\nP4 = 1 0 0 0a 1 0 0b a 1 0 c b a 0 ,Q4 = d c b a0 d c b0 0 d c 0 0 0 d . (B.13) We require det(PkP>k − Q>kQk) =: δk > 0, for k = 1, 2, 3, 4. If k = 1, we have 1 − d2 > 0, namely, |d| < 1. δ2 > 0 reduces to (c − ad)2 < (1 − d2)2 and thus |c − ad| < 1 − d2 due to the first condition. δ4 > 0 simplifies to:\n−((a+ c)2 − (b+ d+ 1)2)((b− d− 1)(d− 1)2 − (a− c)(c− ad))2 < 0, (B.14) 3This is well-known when k = 1, see e.g. Saad (2003).\nwhich yields |a+ c| < |b+ d+ 1|. Finally, δ3 > 0 reduces to:\n((b− d− 1)(d− 1)2 − (a− c)(c− ad))((d2 − 1)(b+ d+ 1) + (c− ad)(a+ c)) > 0. (B.15)\nDenote p(λ) := λ4 + aλ3 + bλ2 + cλ + d, we must have p(1) > 0 and p(−1) > 0, as otherwise there is a real root λ0 with |λ0| ≥ 1. Hence we obtain b + d + 1 > |a + c| > 0. Also, from |c− ad| < 1− d2, we know that:\n|c− ad| · |a+ c| < |b+ d+ 1|(1− d2) = (b+ d+ 1)(1− d2). (B.16)\nSo, the second factor in B.15 is negative and the positivity of the first factor reduces to:\nb < (1 + d) + (c− ad)(a− c)\n(d− 1)2 . (B.17)\nTo obtain the Schur condition for cubic polynomials, we take d = 0, and the quartic Schur condition becomes:\n|c| < 1, |a+ c| < b+ 1, b− ac < 1− c2. (B.18)\nTo obtain the Schur condition for quadratic polynomials, we take c = 0 in the above and write:\nb < 1, |a| < 1 + b. (B.19)\nThe proof is now complete." }, { "heading": "C PROOFS IN SECTION 3", "text": "Some of the following proofs in Appendix C.4 and C.5 rely on Mathematica code (mostly with the built-in function Reduce) but in principle the code can be verified manually using cylindrical algebraic decomposition.4" }, { "heading": "C.1 DERIVATION OF CHARACTERISTIC POLYNOMIALS", "text": "In this appendix, we derive the exact forms of LDSs (eq. (2.3)) and the characteristic polynomials for all gradient-based methods introduced in §2, with eq. (2.4). The following lemma is well-known and easy to verify using Schur’s complement:\nLemma C.1. GivenM ∈ R2n×2n,A ∈ Rn×n and\nM = [ A B C D ] . (C.1)\nIf C andD commute, then detM = det(AD −BC).\nGradient descent From equation 2.6 the update equation of Jacobi GD can be derived as:\nz(t+1) =\n[ I −α1E\nα2E > I\n] z(t), (C.2)\nand with Lemma C.1, we compute the characteristic polynomial as in eq. (2.4):\ndet [ (λ− 1)I α1E −α2E> (λ− 1)I ] = det[(λ− 1)2I + α1α2EE>], (C.3)\nWith spectral decomposition we obtain equation 3.1. Taking α2 → λα2 and with Theorem 2.3 we obtain the corresponding GS updates. Therefore, the characteristic polynomials for GD are:\nJ: (λ− 1)2 + α1α2σ2 = 0, GS: (λ− 1)2 + α1α2σ2λ = 0. (C.4) 4See the online Mathematica documentation.\nExtra-gradient From eq. (2.7) and eq. (2.8), the update of Jacobi EG is:\nz(t+1) = [ I − β2EE> −α1E α2E > I − β1E>E ] z(t), (C.5)\nthe characteristic polynomial is:\ndet\n[ (λ− 1)I + β2EE> α1E\n−α2E> (λ− 1)I + β1E>E\n] . (C.6)\nSince we assumed α2 > 0, we can left multiply the second row by β2E/α2 and add it to the first row. 
Hence, we obtain:\ndet [ (λ− 1)I α1E + (λ− 1)β2E/α2 + β1β2EE>E/α2 −α2E> (λ− 1)I + β1E>E ] . (C.7)\nWith Lemma C.1 the equation above becomes:\ndet[(λ− 1)2I + (β1 + β2)E>E(λ− 1) + (α1α2E>E + β1β2E>EE>E)], (C.8) which simplifies to equation 3.2 with spectral decomposition. Note that to obtain the GS polynomial, we simply take α2 → λα2 in the Jacobi polynomial as shown in Theorem 2.3. For the ease of reading we copy the characteristic equations for generalized EG:\nJ: (λ− 1)2 + (β1 + β2)σ2(λ− 1) + (α1α2σ2 + β1β2σ4) = 0, (C.9) GS: (λ− 1)2 + (α1α2 + β1 + β2)σ2(λ− 1) + (α1α2σ2 + β1β2σ4) = 0. (C.10)\nOptimistic gradient descent We can compute the LDS for OGD with eq. (2.9) and eq. (2.10):\nz(t+2) =\n[ I −α1E\nα2E > I\n] z(t+1) + [ 0 β1E\n−β2E> 0\n] z(t), (C.11)\nWith eq. (2.4), the characteristic polynomial for Jacobi OGD is\ndet\n[ (λ2 − λ)I (λα1 − β1)E\n(−λα2 + β2)E> (λ2 − λ)I\n] . (C.12)\nTaking the determinant and with Lemma C.1 we obtain equation 3.6. The characteristic polynomial for GS updates in equation 3.7 can be subsequently derived with Theorem 2.3, by taking (α2, β2)→ (λα2, λβ2). For the ease of reading we copy the characteristic polynomials from the main text as:\nJ: λ2(λ− 1)2 + (λα1 − β1)(λα2 − β2)σ2 = 0, (C.13) GS: λ2(λ− 1)2 + (λα1 − β1)(λα2 − β2)λσ2 = 0. (C.14)\nMomentum method With eq. (2.11) and eq. (2.12), the LDS for the momentum method is:\nz(t+2) = [ (1 + β1)I −α1E α2E > (1 + β2)I ] z(t+1) + [ −β1I 0 0 −β2I ] z(t), (C.15)\nFrom eq. (2.4), the characteristic polynomial for Jacobi momentum is\ndet\n[ (λ2 − λ(1 + β1) + β1)I λα1E\n−λα2E> (λ2 − λ(1 + β2) + β2)I\n] . (C.16)\nTaking the determinant and with Lemma C.1 we obtain equation 3.10, while equation 3.11 can be derived with Theorem 2.3, by taking α2 → λα2. For the ease of reading we copy the characteristic polynomials from the main text as:\nJ: (λ− 1)2(λ− β1)(λ− β2) + α1α2σ2λ2 = 0, (C.17) GS: (λ− 1)2(λ− β1)(λ− β2) + α1α2σ2λ3 = 0. (C.18)" }, { "heading": "C.2 PROOF OF THEOREM 3.1: SCHUR CONDITIONS OF GD", "text": "Theorem 3.1 (GD). Jacobi GD and Gauss–Seidel GD do not converge. However, Gauss–Seidel GD can have a limit cycle while Jacobi GD always diverges.\nProof. With the notations in Corollary 2.1, for Jacobi GD, b = 1 + α2σ2 > 1. For Gauss–Seidel GD, b = 1. The Schur conditions are violated." }, { "heading": "C.3 PROOF OF THEOREM 3.2: SCHUR CONDITIONS OF EG", "text": "Theorem 3.2 (EG). For generalized EG with α1 = α2 = α and γi = βi/α, Jacobi and Gauss–Seidel updates achieve linear convergence iff for any singular value σ of E, we have:\nJ : |β1σ2 + β2σ2 − 2| < 1 + (1− β1σ2)(1− β2σ2) + α2σ2, (1− β1σ2)(1− β2σ2) + α2σ2 < 1, (3.4) GS : |(β1 + β2 + α2)σ2 − 2| < 1 + (1− β1σ2)(1− β2σ2), (1− β1σ2)(1− β2σ2) < 1. (3.5)\nIf β1 +β2 +α2 < 2/σ21 , the convergence region of GS updates strictly include that of Jacobi updates.\nBoth characteristic polynomials can be written as a quadratic polynomial λ2 + aλ+ b, where:\nJ: a = (β1 + β2)σ2 − 2, b = (1− β1σ2)(1− β2σ2) + α2σ2, (C.19) GS: a = (β1 + β2 + α2)σ2 − 2, b = (1− β1σ2)(1− β2σ2). (C.20)\nCompared to Jacobi EG, the only difference between Gauss–Seidel and Jacobi updates is that the α2σ2 in b is now in a, which agrees with Theorem 2.3. Using Corollary 2.1, we can derive the Schur conditions equation 3.4 and equation 3.5.\nMore can be said if β1 +β2 is small. For instance, if β1 +β2 +α2 < 2/σ21 , then equation 3.4 implies equation 3.5. 
In this case, the first conditions of equation 3.4 and equation 3.5 are equivalent, while the second condition of equation 3.4 strictly implies that of equation 3.5. Hence, the Schur region of Gauss–Seidel updates includes that of Jacobi updates. The same holds true if β1 + β2 < 43σ21 .\nMore precisely, to show that the GS convergence region strictly contains that of the Jacobi convergence region, simply take β1 = β2 = β. The Schur condition for Jacobi EG and Gauss–Seidel EG are separately:\nJ: α2σ2 + (βσ2 − 1)2 < 1, (C.21) GS: 0 < βσ2 < 2 and |ασ| < 2− βσ2. (C.22)\nIt can be shown that if β = α2/3 and α→ 0, equation C.21 is always violated whereas equation C.22 is always satisfied.\nConversely, we give an example when Jacobi EG converges while GS EG does not. Let β1σ2 = β2σ 2 ≡ 32 , then Jacobi EG converges iff α 2σ2 < 34 while GS EG converges iff α 2σ2 < 14 ." }, { "heading": "C.4 PROOF OF THEOREM 3.3: SCHUR CONDITIONS OF OGD", "text": "In this subsection, we fill in the details of the proof of Theorem 3.3, by first deriving the Schur conditions of OGD, and then studying the relation between Jacobi OGD and GS OGD. Theorem 3.3 (OGD). For generalized OGD with α1 = α2 = α, Jacobi and Gauss–Seidel updates achieve linear convergence iff for any singular value σ of E, we have:\nJ : { |β1β2σ2| < 1, (α− β1)(α− β2) > 0, 4 + (α+ β1)(α+ β2)σ2 > 0, α2 ( β21σ 2 + 1 ) ( β22σ 2 + 1 ) < (β1β2σ 2 + 1)(2α(β1 + β2) + β1β2(β1β2σ 2 − 3)); (3.8)\nGS :\n{ (α− β1)(α− β2) > 0, (α+ β1)(α+ β2)σ2 < 4, (αβ1σ 2 + 1)(αβ2σ 2 + 1) > (1 + β1β2σ 2)2. (3.9)\nThe convergence region of GS updates strictly include that of Jacobi updates.\nThe Jacobi characteristic polynomial is now quartic in the form λ4 + aλ3 + bλ2 + cλ+ d, with\na = −2, b = α2σ2 + 1, c = −α(β1 + β2)σ2, d = β1β2σ2. (C.23) Comparably, the GS polynomial equation 3.7 can be reduced to a cubic one λ3 + aλ2 + bλ+ c with\na = −2 + α2σ2, b = −α(β1 + β2)σ2 + 1, c = β1β2σ2. (C.24)\nFirst we derive the Schur conditions equation 3.8 and equation 3.9. Note that other than Corollary 2.1, an equivalent Schur condition can be read from Cheng & Chiou (2007, Theorem 1) as:\nTheorem C.1 (Cheng & Chiou (2007)). A real quartic polynomial λ4 + aλ3 + bλ2 + cλ + d is Schur stable iff:\n|d| < 1, |a| < d+ 3, |a+ c| < b+ d+ 1, (1− d)2b+ c2 − a(1 + d)c− (1 + d)(1− d)2 + a2d < 0. (C.25)\nWith equation C.23 and Theorem C.1, it is straightforward to derive equation 3.8. With equation C.24 and Corollary 2.1, we can derive equation 3.9 without much effort.\nNow, let us study the relation between the convergence region of Jacobi OGD and GS OGD, as given in equation 3.8 and equation 3.9. Namely, we want to prove the last sentence of Theorem 3.3. The outline of our proof is as follows. We first show that each region of (α, β1, β2) described in equation 3.8 (the Jacobi region) is contained in the region described in equation 3.9 (the GS region). Since we are only studying one singular value, we slightly abuse the notations and rewrite βiσ as βi (i = 1, 2) and ασ as α. From equation 3.6 and equation 3.7, β1 and β2 can switch. WLOG, we assume β1 ≥ β2. There are four cases to consider:\n• β1 ≥ β2 > 0. The third Jacobi condition in equation 3.8 now is redundant, and we have α > β1 or α < β2 for both methods. Solving the quadratic feasibility condition for α gives:\n0 < β2 < 1, β2 ≤ β1 < β2 +\n√ 4 + 5β22\n2(1 + β22) , β1 < α <\nu+ √ u2 + tv\nt , (C.26)\nwhere u = (β1β2 + 1)(β1 + β2), v = β1β2(β1β2 + 1)(β1β2 − 3), t = (β21 + 1)(β22 + 1). 
On the other hand, assume α > β1, the first and third GS conditions are automatic. Solving the second gives:\n0 < β2 < 1, β2 ≤ β1 < −β2 +\n√ 8 + β22 2 , β1 < α < − 1 2 (β1+β2)+ 1 2 √ (β1 − β2)2 + 16.\n(C.27) Define f(β2) := −β2 + √ 8 + β22/2 and g(β2) := (β2 + √ 4 + 5β22)/(2(1 + β 2 2)), and one can show that\nf(β2) ≥ g(β2). (C.28)\nFurthermore, it can also be shown that given 0 < β2 < 1 and β2 ≤ β1 < g(β2), we have (u+ √ u2 + 4v)/t < −(β1 + β2)/2 + (1/2) √ (β1 − β2)2 + 16. (C.29)\n• β1 ≥ β2 = 0. The Schur condition for Jacobi and Gauss–Seidel updates reduces to:\nJacobi: 0 < β1 < 1, β1 < α < 2β1\n1 + β21 , (C.30)\nGS: 0 < β1 < √ 2, β1 < α < −β1 +\n√ 16 + β21\n2 . (C.31)\nOne can show that given β1 ∈ (0, 1), we have 2β1/(1 + β21) < (−β1 + √\n16 + β21)/2. • β1 ≥ 0 > β2. Reducing the first, second and fourth conditions of equation 3.8 yields:\nβ2 < 0, 0 < β1 < β2 +\n√ 4 + 5β22\n2(1 + β22) , β1 < α <\nu+ √ u2 + tv\nt . (C.32)\nThis region contains the Jacobi region. It can be similarly proved that even within this larger region, GS Schur condition equation 3.9 is always satisfied. • β2 ≤ β1 < 0. We have u < 0, tv < 0 and thus α < (u + √ u2 + tv)/t < 0. This\ncontradicts our assumption that α > 0.\nCombining the four cases above, we know that the Jacobi region is contained in the GS region.\nTo show the strict inclusion, take β1 = β2 = α/5 and α → 0. One can show that as long as α is small enough, all the Jacobi regions do not contain this point, each of which is described with a\nsingular value in equation 3.8. However, all the GS regions described in equation 3.9 contain this point.\nThe proof above is still missing some details. We provide the proofs of equation C.26, equation C.28, equation C.29 and equation C.32 in the sub-sub-sections below, with the help of Mathematica, although one can also verify these claims manually. Moreover, a one line proof of the inclusion can be given with Mathematica code, as shown in Section C.4.5." }, { "heading": "C.4.1 PROOF OF EQUATION C.26", "text": "The fourth condition of equation 3.8 can be rewritten as:\nα2t− 2uα− v < 0, (C.33)\nwhere u = (β1β2 + 1)(β1 + β2), v = β1β2(β1β2 + 1)(β1β2 − 3), t = (β21 + 1)(β22 + 1). The discriminant is 4(u2 + tv) = (1 − β1β2)2(1 + β1β2)(β21 + β22 + β21β22 − β1β2) ≥ 0. Since if β1β2 < 0,\nβ21 + β 2 2 + β 2 1β 2 2 − β1β2 = β21 + β22 + β1β2(β1β2 − 1) > 0,\nIf β1β2 ≥ 0, β21 + β 2 2 + β 2 1β 2 2 − β1β2 = (β1 − β2)2 + β1β2(1 + β1β2) ≥ 0, where we used |β1β2| < 1 in both cases. So, equation C.33 becomes:\nu− √ u2 + tv\nt < α <\nu+ √ u2 + tv\nt . (C.34)\nCombining with α > β1 or α < β2 obtained from the second condition, we have:\nu− √ u2 + tv\nt < α < β2 or β1 < α <\nu+ √ u2 + tv\nt . (C.35)\nThe first case is not possible, with the following code:\nu = (b1 b2 + 1) (b1 + b2); v = b1 b2 (b1 b2 + 1) (b1 b2 - 3); t = (b1^2 + 1) (b2^2 + 1); Reduce[b2 t > u - Sqrt[u^2 + t v] && b1 >= b2 > 0 && Abs[b1 b2] < 1],\nand we have:\nFalse.\nTherefore, the only possible case is β1 < α < (u+ √ u2 + tv)/t. Where the feasibility region can be solved with:\nReduce[b1 t < u + Sqrt[u^2+t v]&&b1>=b2>0&&Abs[b1 b2] < 1].\nWhat we get is:\n0<b2<1 && b2<=b1<b2/(2 (1+b2^2))+1/2 Sqrt[(4+5 b2^2)/(1+b2^2)^2].\nTherefore, we have proved equation C.26." }, { "heading": "C.4.2 PROOF OF EQUATION C.28", "text": "With\nReduce[-(b2/2) + Sqrt[8 + b2^2]/2 >= (b2 + Sqrt[4 + 5 b2^2])/(2 (1 + b2^2)) && 0 < b2 < 1],\nwe can remove the first constraint and get:\n0 < b2 < 1." 
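Before moving on to the remaining sub-proofs, the inclusion being established in this subsection can also be checked by random sampling (an illustrative sketch of ours, with σ = 1 and sampling ranges chosen arbitrarily): whenever the Jacobi condition (3.8) holds, the GS condition (3.9) should hold as well.

import numpy as np

rng = np.random.default_rng(2)

def jacobi_ok(a, b1, b2):                      # eq. (3.8) with sigma = 1
    return (abs(b1 * b2) < 1 and (a - b1) * (a - b2) > 0
            and 4 + (a + b1) * (a + b2) > 0
            and a**2 * (b1**2 + 1) * (b2**2 + 1)
                < (b1 * b2 + 1) * (2 * a * (b1 + b2) + b1 * b2 * (b1 * b2 - 3)))

def gs_ok(a, b1, b2):                          # eq. (3.9) with sigma = 1
    return ((a - b1) * (a - b2) > 0 and (a + b1) * (a + b2) < 4
            and (a * b1 + 1) * (a * b2 + 1) > (1 + b1 * b2) ** 2)

for _ in range(100000):
    a = rng.uniform(0, 2)
    b1, b2 = rng.uniform(-2, 2, size=2)
    assert not jacobi_ok(a, b1, b2) or gs_ok(a, b1, b2)
print("no counterexample: Jacobi region contained in GS region")

No counterexample is found, consistent with the symbolic one-line proof in Section C.4.5.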
}, { "heading": "C.4.3 PROOF OF EQUATION C.29", "text": "Given\nReduce[-1/2 (b1 + b2) + 1/2 Sqrt[(b1 - b2)^2 + 16] > (u + Sqrt[u^2 + t v])/t &&\n0 < b2 < 1 && b2 <= b1 < (b2 + Sqrt[4 + 5 b2^2])/(2 (1 + b2^2)), {b2, b1}],\nwe can remove the first constraint and get:\n0 < b2 < 1 && b2 <= b1 < b2/(2 (1 + b2^2)) + 1/2 Sqrt[(4 + 5 b2^2)/(1 + b2^2)^2]." }, { "heading": "C.4.4 PROOF OF EQUATION C.32", "text": "The second Jacobi condition simplifies to α > β1 and the fourth simplifies to equation C.34. Combining with the first Jacobi condition:\nReduce[Abs[b1 b2] < 1 && a > b1 && (u - Sqrt[u^2 + t v])/t < a < (u + Sqrt[u^2 + t v])/t && b1 >= 0 && b2 < 0, {b2, b1, a} ] // Simplify,\nwe have:\nb2 < 0 && b1 > 0 && b2/(1 + b2^2) + Sqrt[(4 + 5 b2^2)/(1 + b2^2)^2] > 2 b1 && b1 < a < (b1 + b2 + b1^2 b2 + b1 b2^2)/((1 + b1^2) (1 + b2^2)) +\nSqrt[((-1 + b1 b2)^2 (b1^2 + b2^2 + b1 b2 (-1 + b2^2) + b1^3 (b2 + b2^3)))/((1 + b1^2)^2 (1 + b2^2)^2)].\nThis can be further simplified to achieve equation C.32." }, { "heading": "C.4.5 ONE LINE PROOF", "text": "In fact, there is another very simple proof:\nReduce[ForAll[{b1, b2, a}, (a - b1) (a - b2) > 0 && (a + b1) (a + b2) > -4 && Abs[b1 b2] < 1 && a^2 (b1^2 + 1) (b2^2 + 1) < (b1 b2 + 1) (2 a (b1 + b2) + b1 b2 (b1 b2 - 3)), (a - b1) (a - b2) > 0 && (a + b1) (a + b2) < 4 && (a b1 + 1) (a b2 + 1) > (1 + b1 b2)^2], {b2, b1, a}] True.\nHowever, this proof does not tell us much information about the range of our variables." }, { "heading": "C.5 PROOF OF THEOREM 3.4: SCHUR CONDITIONS OF MOMENTUM", "text": "Theorem 3.4 (momentum). For the generalized momentum method with α1 = α2 = α, the Jacobi updates never converge, while the GS updates converge iff for any singular value σ of E, we have:\n|β1β2| < 1, | − α2σ2 + β1 + β2 + 2| < β1β2 + 3, 4(β1 + 1)(β2 + 1) > α2σ2, α2σ2β1β2 < (1− β1β2)(2β1β2 − β1 − β2). (3.12)\nThis condition implies that at least one of β1, β2 is negative." }, { "heading": "C.5.1 SCHUR CONDITIONS OF JACOBI AND GS UPDATES", "text": "Jacobi condition We first rename ασ as al and β1, β2 as b1, b2. With Theorem C.1:\n{Abs[d] < 1, Abs[a] < d + 3, a + b + c + d + 1 > 0, -a + b - c + d + 1 > 0, (1 - d)^2 b - (c - a d) (a - c) - (1 + d) (1 - d)^2 < 0} /. {a -> -2 - b1 - b2, b -> al^2 + 1 + 2 (b1 + b2) + b1 b2, c -> -b1 - b2 - 2 b1 b2, d -> b1 b2} // FullSimplify.\nWe obtain:\n{Abs[b1 b2] < 1, Abs[2 + b1 + b2] < 3 + b1 b2, al^2 > 0, al^2 + 4 (1 + b1) (1 + b2) > 0, al^2 (-1 + b1 b2)^2 < 0}.\nThe last condition is never satisfied and thus Jacobi momentum never converges.\nGauss–Seidel condition With Theorem C.1, we compute:\n{Abs[d] < 1, Abs[a] < d + 3, a + b + c + d + 1 > 0, -a + b - c + d + 1 > 0, (1 - d)^2 b + c^2 - a (1 + d) c - (1 + d) (1 - d)^2 + a^2 d < 0} /. {a -> al^2 - 2 - b1 - b2, b -> 1 + 2 (b1 + b2) + b1 b2, c -> -b1 - b2 - 2 b1 b2, d -> b1 b2} // FullSimplify.\nThe result is:\n{Abs[b1 b2] < 1, Abs[2 - al^2 + b1 + b2] < 3 + b1 b2, al^2 > 0, 4 (1 + b1) (1 + b2) > al^2, al^2 (b1 + b2 + (-2 + al^2 - b1) b1 b2 + b1 (-1 + 2 b1) b2^2) < 0},\nwhich can be further simplified to equation 3.12." }, { "heading": "C.5.2 NEGATIVE MOMENTUM", "text": "With Theorem 3.4, we can actually show that in general at least one of β1 and β2 must be negative. There are three cases to consider, and in each case we simplify equation 3.12:\n1. β1β2 = 0. WLOG, let β2 = 0, and we obtain\n−1 < β1 < 0 and α2σ2 < 4(1 + β1). (C.36)\n2. β1β2 > 0. We have\n−1 < β1 < 0, −1 < β2 < 0 , α2σ2 < 4(1 + β1)(1 + β2). (C.37)\n3. β1β2 < 0. WLOG, we assume β1 ≥ β2. 
We obtain: −1 < β2 < 0, 0 < β1 < min { − 1 3β2 , ∣∣∣− β2 1 + 2β2 ∣∣∣} . (C.38) The constraints for α are α > 0 and:\nmax\n{ (1− β1β2)(2β1β2 − β1 − β2)\nβ1β2 , 0\n} < α2σ2 < 4(1 + β1)(1 + β2). (C.39)\nThese conditions can be further simplified by analyzing all singular values. They only depend on σ1 and σn, the largest and the smallest singular values. Now, let us derive equation C.37, equation C.38 and equation C.39 more carefully. Note that we use a for ασ." }, { "heading": "C.5.3 PROOF OF EQUATION C.37", "text": "Reduce[Abs[b1 b2] < 1 && Abs[-a^2 + b1 + b2 + 2] < b1 b2 + 3 && 4 (b1 + 1) (b2 + 1) > a^2 && a^2 b1 b2 < (1 - b1 b2) (2 b1 b2 - b1 - b2) && b1 b2 > 0 && a > 0, {b2, b1, a}]\n-1 < b2 < 0 && -1 < b1 < 0 && 0 < a < Sqrt[4 + 4 b1 + 4 b2 + 4 b1 b2]" }, { "heading": "C.5.4 PROOF OF EQUATIONS C.38 AND C.39", "text": "Reduce[Abs[b1 b2] < 1 && Abs[-a^2 + b1 + b2 + 2] < b1 b2 + 3 && 4 (b1 + 1) (b2 + 1) > a^2 && a^2 b1 b2 < (1 - b1 b2) (2 b1 b2 - b1 - b2) && b1 b2 < 0 && b1 >= b2 && a > 0, {b2, b1, a}]\n(-1 < b2 <= -(1/3) && ((0 < b1 <= b2/(-1 + 2 b2) && 0 < a < Sqrt[4 + 4 b1 + 4 b2 + 4 b1 b2]) || (b2/(-1 + 2 b2) < b1 < -(1/(3 b2)) &&\nSqrt[(-b1 - b2 + 2 b1 b2 + b1^2 b2 + b1 b2^2 - 2 b1^2 b2^2)/( b1 b2)] < a < Sqrt[4 + 4 b1 + 4 b2 + 4 b1 b2]))) || (-(1/3) <\nb2 < 0 && ((0 < b1 <= b2/(-1 + 2 b2) && 0 < a < Sqrt[4 + 4 b1 + 4 b2 + 4 b1 b2]) || (b2/(-1 + 2 b2) < b1 < -(b2/(1 + 2 b2)) &&\nSqrt[(-b1 - b2 + 2 b1 b2 + b1^2 b2 + b1 b2^2 - 2 b1^2 b2^2)/( b1 b2)] < a < Sqrt[4 + 4 b1 + 4 b2 + 4 b1 b2])))\nSome further simplication yields equation C.38 and equation C.39." }, { "heading": "D PROOFS IN SECTION 4", "text": "For bilinear games and gradient-based methods, a Schur condition defines the region of convergence in the parameter space, as we have seen in Section 3. However, it is unknown which setting of parameters has the best convergence rate in a Schur stable region. We explore this problem now. Due to Theorem 3.1, we do not need to study GD. The remaining cases are EG, OGD and GS momentum (Jacobi momentum does not converge due to Theorem 3.4). Analytically (Section D.1 and D.2), we study the optimal linear rates for EG and special cases of generalized OGD (Jacobi OGD with β1 = β2 and Gauss–Seidel OGD with β2 = 0). The special cases include the original form of OGD. We also provide details for the numerical method described at the end of Section 4.\nThe optimal spectral radius is obtained by solving another min-max optimization problem:\nmin θ max σ∈Sv(E) r(θ, σ), (D.1)\nwhere θ denotes the collection of all hyper-parameters, and r(θ, σ) is defined as the spectral radius function that relies on the choice of parameters and the singular value σ. We also use Sv(E) to denote the set of singular values of E.\nIn general, the function r(θ, σ) is non-convex and thus difficult to analyze. However, in the special case of quadratic characteristic polynomials, it is possible to solve equation D.1. This is how we will analyze EG and special cases of OGD, as r(θ, σ) can be expressed using root functions of quadratic polynomials. For cubic and quartic polynomials, it is in principle also doable as we have analytic formulas for the roots. However, these formulas are extremely complicated and difficult to optimize and we leave it for future work. For EG and OGD, we will show that the optimal linear rates depend only on the conditional number κ := σ1/σn.\nFor simplicity, we always fix α1 = α2 = α > 0 using the scaling symmetry studied in Section 3." 
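As a concrete (illustrative) baseline for equation D.1, the min-max problem can also be solved by brute force: grid the parameters and, following the endpoint reduction used throughout this appendix, take the worst spectral radius over the extreme singular values σ_1 and σ_n. The sketch below (ours; the grid ranges and σ_1 = 1, σ_n = 0.5 are arbitrary choices) does this for GS OGD with β_2 = 0, whose quadratic characteristic polynomial λ^2 + (α^2σ^2 − 2)λ + 1 − αβ_1σ^2 is derived in Appendix D.2.2 below:

import numpy as np

def radius(alpha, beta1, sigma):
    # Largest root modulus of the GS OGD (beta_2 = 0) characteristic polynomial.
    return np.abs(np.roots(
        [1.0, alpha**2 * sigma**2 - 2.0, 1.0 - alpha * beta1 * sigma**2])).max()

s1, sn = 1.0, 0.5                              # sigma_1 and sigma_n
alphas = np.linspace(0.05, 2.0, 80)
betas = np.linspace(0.05, 2.0, 80)
best = min((max(radius(a, b, s1), radius(a, b, sn)), a, b)
           for a in alphas for b in betas)
print(best)

On this grid the optimum comes out near 0.78, approaching √((κ^2 − 1)/(κ^2 + 1)) ≈ 0.775 for κ = 2 as the grid is refined, consistent with Theorem 4.2.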
}, { "heading": "D.1 PROOF OF THEOREM 4.1: OPTIMAL CONVERGENCE RATE OF EG", "text": "Theorem 4.1 (EG optimal). Both Jacobi and GS EG achieve the optimal exponent of linear convergence r∗ = (κ2 − 1)/(κ2 + 1) at α→ 0 and β1 = β2 = 2/(σ21 + σ2n). As κ→∞, r∗ → 1− 2/κ2." }, { "heading": "D.1.1 JACOBI EG", "text": "For Jacobi updates, if β1 = β2 = β, by solving the roots of equation 3.2, the min-max problem is:\nmin α,β max σ∈Sv(E)\n√ α2σ2 + (1− βσ2)2. (D.2)\nIf σ1 = σn = σ, we can simply take α→ 0 and β = 1/σ2 to obtain a super-linear convergence rate. Otherwise, let us assume σ1 > σn. We obtain a lower bound by taking α → 0 and equation D.2 reduces to:\nmin β max σ∈Sv(E)\n|1− βσ2|. (D.3)\nThe optimal solution is given at 1− βσ2n = βσ21 − 1, yielding β = 2/(σ21 + σ2n). The optimal radius is thus (σ21 − σ2n)/(σ21 + σ2n) since the lower bound equation D.3 can be achieved by taking α→ 0. From general β1, β2, it can be verified that the optimal radius is achieved at β1 = β2 and the problem reduces to the previous case. The optimization problem is:\nmin α,β1,β2 max σ∈Sv(E) r(α, β1, β2, σ), (D.4)\nwhere\nr(α, β1, β2, σ) = {√ (1− β1σ2)(1− β2σ2) + α2σ2 4α2 > (β1 − β2)2σ2, |1− 12 (β1 + β2)σ 2|+ 12 √ (β1 − β2)2σ4 − 4α2σ2 4α2 ≤ (β1 − β2)2σ2.\nIn the first case, a lower bound is obtained at α2 = (β1 − β2)2σ2/4 and thus the objective only depends on β1 + β2. In the second case, the lower bound is obtained at α → 0 and β1 → β2. Therefore, the function is optimized at β1 = β2 and α→ 0. Our analysis above does not mean that α → 0 and β1 = β2 = 2/(σ21 + σ2n) is the only optimal choice. For example, when σ1 = σn = 1, we can take β1 = 1 + α and β2 = 1 − α to obtain a super-linear convergence rate." }, { "heading": "D.1.2 GAUSS–SEIDEL EG", "text": "For Gauss–Seidel updates and β1 = β2 = β, we do the following optimization:\nmin α,β max σ∈Sv(E) r(α, β, σ), (D.5)\nwhere by solving equation 3.3:\nr(α, β, σ) = { 1− βσ2 α2σ2 < 4(1− βσ2), α2\n2 σ 2 − (1− βσ2) +\n√ α2σ2(α2σ2 − 4(1− βσ2))/2 α2σ2 ≥ 4(1− βσ2).\nr(σ, β, σ2) is quasi-convex in σ2, so we just need to minimize over α, β at both end points. Hence, equation D.5 reduces to:\nmin α,β\nmax{r(α, β, σ1), r(α, β, σn)}.\nBy arguing over three cases: α2 + 4β < 4/σ21 , α 2 + 4β > 4/σ2n and 4/σ 2 1 ≤ α2 + 4β ≤ 4/σ2n, we find that the minimum (κ2 − 1)/(κ2 + 1) can be achieved at α→ 0 and β = 2/(σ21 + σ2n), the same as Jacobi EG. This is because α→ 0 decouples x and y and it does not matter whether the update is Jacobi or GS.\nFor general β1, β2, it can be verified that the optimal radius is achieved at β1 = β2. We do the following transformation: βi → ξi − α2/2, so that the characteristic polynomial becomes:\n(λ− 1)2 + (ξ1 + ξ2)σ2(λ− 1) + α2σ2 + (ξ1 − α2/2)(ξ2 − α2/2)σ4 = 0. (D.6) Denote ξ1 + ξ2 = φ, and (ξ1 − α2/2)(ξ2 − α2/2) = ν, we have:\nλ2 − (2− σ2φ)λ+ 1− σ2φ+ σ4v + σ2α2 = 0. (D.7) The discriminant is ∆ := σ2(σ2(φ2 − 4ν)− 4α2). We discuss two cases:\n1. φ2 − 4ν < 0. We are minimizing:\nmin α,u,v\n√ 1 + (α2 − φ)σ21 + σ41ν ∨ √ 1 + (α2 − φ)σ2n + σ4nν,\nwith a ∨ b := max{a, b} a shorthand. A minimizer is at α → 0 and ν → φ2/4 (since φ2 < 4ν), where β1 = β2 = 2/(σ21 + σ 2 n) and α→ 0.\n2. φ2 − 4ν ≥ 0. A lower bound is: min u |1− φσ21/2| ∨ |1− φσ2n/2|,\nwhich is obtained iff 4α2 ∼ (φ2 − 4ν)t for all σ2. This is only possible if α → 0 and φ2 → 4ν, which yields β1 = β2 = 2/(σ21 + σ2n).\nFrom what has been discussed, the optimal radius is (κ2 − 1)/(κ2 + 1) which can be achieved at β1 = β2 = 2/(σ 2 1 + σ 2 n) and α → 0. Again, this might not be the only choice. 
For instance, take σ1 = σ 2 n = 1, from equation 3.3, a super-linear convergence rate can be achieved at β1 = 1 and β2 = 1− α2." }, { "heading": "D.2 PROOF OF THEOREM 4.2: OPTIMAL CONVERGENCE RATE OF OGD", "text": "Theorem 4.2 (OGD optimal). For Jacobi OGD with β1 = β2 = β, to achieve the optimal linear rate, we must have α ≤ 2β. For the original OGD with α = 2β, the optimal linear rate r∗ satisfies\nr2∗ = 1\n2 +\n1\n4 √\n2σ21\n√ (σ21 − σ2n)(5σ21 − σ2n + √ (σ21 − σ2n)(9σ21 − σ2n)), (D.8)\nat\nβ∗ = 1\n4 √ 2\n√ 3σ41 − (σ21 − σ2n)3/2 √ 9σ21 − σ2n + 6σ21σ2n − σ4n σ41σ 2 n . (D.9)\nIf κ → ∞, r∗ ∼ 1 − 1/(6κ2). For Gauss–Seidel OGD with β2 = 0, the optimal linear rate is r∗ = √ (κ2 − 1)/(κ2 + 1), at α = √ 2/σ1 and β1 = √ 2σ1/(σ 2 1 + σ 2 n). If κ→∞, r∗ ∼ 1− 1/κ2.\nFor OGD, the characteristic polynomials equation 3.6 and equation 3.7 are quartic and cubic separately, and thus optimizing the spectral radii for generalized OGD is difficult. However, we can study two special cases: for Jacobi OGD, we take β1 = β2; for Gauss–Seidel OGD, we take β2 = 0. In both cases, the spectral radius functions can be obtained by solving quadratic polynomials." }, { "heading": "D.2.1 JACOBI OGD", "text": "We assume β1 = β2 = β in this subsection. The characteristic polynomial for Jacobi OGD equation 3.6 can be written as: λ2(λ− 1)2 + (λα− β)2σ2 = 0. (D.10) Factorizing it gives two equations which are conjugate to each other:\nλ(λ− 1)± i(λα− β)σ = 0. (D.11) The roots of one equation are the conjugates of the other equation. WLOG, we solve λ(λ − 1) + i(λα− β)σ = 0 which gives (1/2)(u± v), where\nu = 1− iασ, v = √ 1− α2σ2 − 2i(α− 2β)σ. (D.12)\nDenote ∆1 = 1− α2σ2 and ∆2 = 2(α− 2β)σ. If α ≥ 2β, v can be expressed as:\nv = 1√ 2\n(√√ ∆21 + ∆ 2 2 + ∆1 − i √√ ∆21 + ∆ 2 2 −∆1 ) =:\n1√ 2 (a− ib), (D.13)\ntherefore, the spectral radius r(α, β, σ) satisfies:\nr(α, β, σ)2 = 1\n4\n( (1 + a/ √ 2)2 + (ασ + b/ √ 2)2 ) = 1\n4 (1 + α2σ2 +\n√ ∆21 + ∆ 2 2 + √ 2(bσα+ a)),\n(D.14) and the minimum is achieved at α = 2β. From now on, we assume α ≤ 2β, and thus v = a + ib. We write: r(α, β, σ)2 = 1\n4 max{\n( (1 + a/ √ 2)2 + (ασ − b/ √ 2)2 ) , ( (1− a/ √ 2)2 + (ασ + b/ √ 2)2 ) },\n= 1\n4 (1 + α2σ2 +\n√ ∆21 + ∆ 2 2 + √ 2|bσα− a|).\n=\n{ 1 4 (1 + α 2σ2 + √ ∆21 + ∆ 2 2 − √\n2(bσα− a)) 0 < ασ ≤ 1, 1 4 (1 + α 2σ2 + √ ∆21 + ∆ 2 2 + √ 2(bσα− a)) ασ > 1. (D.15)\nThis is a non-convex and non-differentiable function, which is extremely difficult to optimize. At α = 2β, in this case, a = √ 1− 4β2σ2sign(1− 4β2σ2) and b = √\n4β2σ2 − 1sign(4β2σ2 − 1). The sign function sign(x) is defined to be 1 if x > 0 and 0 otherwise. The function we are optimizing is a quasi-convex function:\nr(β, σ)2 =\n{ 1 2 (1 + √ 1− 4β2σ2) 4β2σ2 ≤ 1,\n2β2σ2 + βσ √ 4β2σ2 − 1 4β2σ2 > 1. (D.16)\nWe are maximizing over σ and minimizing over β. There are three cases:\n• 4β2σ21 ≤ 1. At 4β2σ21 = 1, the optimal radius is:\nr2∗ = 1\n2\n( 1 + √ 1− 1\nκ2\n) .\n• 4β2σ2n ≥ 1. At 4β2σ2n = 1, the optimal radius satisfies:\nr2∗ = κ2 2 + κ 2\n√ κ2 − 1.\n• 4β2σ2n ≤ 1 and 4β2σ21 ≥ 1. The optimal β is achieved at:\n1\n2\n( 1 + √ 1− 4β2σ2n ) = 2β2σ21 + βσ1 √ 4β2σ21 − 1.\nThe solution is unique since the left is decreasing and the right is increasing. The optimal β is:\nβ∗ = 1\n4 √ 2\n√ 3σ41 − (σ21 − σ2n)3/2 √ 9σ21 − σ2n + 6σ21σ2n − σ4n σ41σ 2 n . (D.17)\nThe optimal radius satisfies:\nr2∗ = 1\n2 +\n1\n4 √\n2σ21\n√ (σ21 − σ2n)(5σ21 − σ2n + √ (σ21 − σ2n)(9σ21 − σ2n)). (D.18)\nThis is the optimal solution among the three cases. If σ2n/σ 2 1 is small enough we have r2 ∼ 1− 1/(3κ2)." 
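The closed forms (D.17) and (D.18) can be cross-checked numerically. The sketch below (ours, for illustration; σ_1 = 1 and σ_n = 0.5 are arbitrary choices) scans β in the piecewise objective (D.16) and compares the grid optimum against β_* and r_*^2:

import numpy as np

s1, sn = 1.0, 0.5

def r2(beta, s):                               # eq. (D.16), squared radius
    q = beta * s
    if 4 * q * q <= 1:
        return 0.5 * (1.0 + np.sqrt(1.0 - 4 * q * q))
    return 2 * q * q + q * np.sqrt(4 * q * q - 1.0)

betas = np.linspace(1e-4, 1.0, 20001)
vals = np.array([max(r2(b, s1), r2(b, sn)) for b in betas])
i = int(vals.argmin())

d = s1**2 - sn**2
beta_star = np.sqrt(3 * s1**4 - d**1.5 * np.sqrt(9 * s1**2 - sn**2)
                    + 6 * s1**2 * sn**2 - sn**4) / (4 * np.sqrt(2) * s1**2 * sn)
r2_star = 0.5 + (np.sqrt(d * (5 * s1**2 - sn**2 + np.sqrt(d * (9 * s1**2 - sn**2))))
                 / (4 * np.sqrt(2) * s1**2))
assert np.isclose(betas[i], beta_star, atol=1e-3)
assert np.isclose(vals[i], r2_star, atol=1e-3)
print(beta_star, r2_star)                      # ~0.561 and ~0.914 here

Both checks pass, with β_* ≈ 0.561 and r_*^2 ≈ 0.914 for this conditioning.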
}, { "heading": "D.2.2 GAUSS–SEIDEL OGD", "text": "In this subsection, we study Gauss–Seidel OGD and fix β2 = 0. The characteristic polynomial equation 3.7 now reduces to a quadratic polynomial:\nλ2 + (α2σ2 − 2)λ+ 1− αβ1σ2 = 0.\nFor convenience, we reparametrize β1 → β/α. So, the quadratic polynomial becomes:\nλ2 + (α2σ2 − 2)λ+ 1− βσ2 = 0.\nWe are doing a min-max optimization minα,β maxσ r(α, β, σ), where r(α, β, σ) is:\nr(α, β, σ) =\n{√ 1− βσ2 α4σ2 < 4(α2 − β)\n1 2 |α 2σ2 − 2|+ 12 √ α4σ4 − 4(α2 − β)σ2 α4σ2 ≥ 4(α2 − β).\n(D.19)\nThere are three cases to consider:\n• α4σ21 ≤ 4(α2 − β). We are minimizing 1− βσ2n over α and β. Optimizing over β1 gives β = α2−α4σ21/4. Then we minimize over α and obtain α2 = 2/σ21 . The optimal β = 1/σ21 and the optimal radius is √ 1− 1/κ2.\n• α4σ2n > 4(α2 − β). Fixing α, the optimal β = α2 − α4σ2n/4, and we are solving\nmin α max\n{ 1\n2 |α2σ21 − 2|+\n1 2 α2 √ σ21(σ 2 1 − σ2n), 1 2 |α2σ2n − 2| } .\nWe need to discuss three cases: α2σ2n > 2, α 2σ21 < 2 and 2/σ 2 1 < α 2 < 2/σ2n. In the first case, the optimal radius is κ2 − 1 + κ √\n(κ2 − 1). In the second case, α2 → 2/σ21 and the optimal radius is √ 1− 1/κ2. In the third case, the\noptimal radius is also √\n1− 1/κ2 minimized at α2 → 2/σ21 . • α4σ21 > 4(α2 − β) and α4σ2n < 4(α2 − β). In this case, we have α2σ21 < 4. Otherwise, r(α, β, σ1) > 1. We are minimizing over:\nmax{ √ 1− βσ2n, 1\n2 |α2σ21 − 2|+\n1\n2\n√ α4σ41 − 4α2σ21 + 4βσ21}.\nThe minimum over α is achieved at α2σ21 = 2, and β = 2/(σ 2 1 +σ 2 n), this gives α = √ 2/σ1\nand β1 = √ 2σ1/(σ 2 1 + σ 2 n). The optimal radius is r∗ = √ (κ2 − 1)/(κ2 + 1).\nOut of the three cases, the optimal radius is obtained in the third case, where r ∼ 1− 1/κ2. This is better than Jacobi OGD, but still worse than the optimal EG." }, { "heading": "D.3 NUMERICAL METHOD", "text": "We first prove Lemma 4.1: Lemma 4.1. A polynomial p(λ) is r-Schur stable iff p(rλ) is Schur stable. Proof. Denote p(λ) = ∏n i=1(λ− λi). We have p(rλ) ∝ ∏n i=1(λ− λi/r), and:\n∀i ∈ [n], |λi| < r ⇐⇒ ∀i ∈ [n], |λi/r| < 1. (D.20)\nWith Lemma 4.1 and Corollary 2.1, we have the following corollary: Corollary D.1. A real quadratic polynomial λ2 + aλ+ b is r-Schur stable iff b < r2, |a| < r+ b/r; A real cubic polynomial λ3 + aλ2 + bλ + c is r-Schur stable iff |c| < r3, |ar2 + c| < r3 + br, br4 − acr2 < r6 − c2; A real quartic polynomial λ4 + aλ3 + bλ2 + cλ + d is r-Schur stable iff |cr5 − adr3| < r8 − d2, |ar2 + c| < br + d/r + r3, and\nb < r2 + dr−2 + r2 (cr2 − ad)(ar2 − c)\n(d− r4)2 .\nProof. In Corollary 2.1, rescale the coefficients according to Lemma 4.1.\nWe can use the corollaries above to find the regions where r-Schur stability is possible, i.e., a linear rate of exponent r. A simple algorithm might be to start from r0 = 1, find the region S0. Then recursively take rt+1 = srt and find the Schur stable region St+1 inside St. If the region is empty then stop the search and return St. s can be taken to be, say, 0.99. Formally, this algorithm can be described as follows in Algorithm 1:\nr0 = 1, t = 0, s = 0.99; Find the r0-Schur region S0; while St is not empty do\nrt+1 = srt; Find the rt+1-Schur region St+1; t = t+ 1;\nend Algorithm 1: Numerical method for finding the optimal convergence rate\nIn this algorithm, Corollary D.1 can be applied to obtain any r-Schur region." }, { "heading": "E SUPPLEMENTARY MATERIAL FOR SECTIONS 5 AND 6", "text": "We provide supplementary material for Sections 5 and 6. 
We first prove that when learning the mean of a Gaussian, WGAN is locally a bilinear game in Appendix E.1. For mixtures of Gaussians, we provide supplementary experiments about Adam in Appendix E.2. This result implies that in some cases, Jacobi updates are better than GS updates. We further verify this claim in Appendix E.3 by showing an example of OGD on bilinear games. Optimizing the spectral radius given a certain singular value is possible numerically, as in Appendix E.4." }, { "heading": "E.1 WASSERSTEIN GAN", "text": "Inspired by Daskalakis et al. (2018), we consider the following WGAN (Arjovsky et al., 2017):\nf(φ,θ) = min φ max θ Ex∼N (v,σ2I)[s(θ\n>x)]− Ez∼N (0,σ2I)[s(θ>(z + φ))], (E.1)\nwith s(x) := 1/(1 + e−x) the sigmoid function. We study the local behavior near the saddle point (v,0), which depends on the Hessian:[\n∇2φφ ∇2φθ ∇2θφ ∇2θθ\n] = [ −Eφ[s′′(θ>z)θθ>] −Eφ[s′′(θ>z)θz> + s′(θ>z)I]\n(∇2φθ)> Ev[s′′(θ>x)xx>]− Eφ[s′′(θ>z)zz>]\n] ,\nwith Ev a shorthand for Ex∼N (v,σ2I) and Eφ for Ez∼N (φ,σ2I). At the saddle point, the Hessian is simplified as:\n[ ∇2φφ ∇2φθ ∇2θφ ∇2θθ ] = [ 0 −s′(0)I −s′(0)I 0 ] = [ 0 −I/4 −I/4 0 ] .\nTherefore, this WGAN is locally a bilinear game." }, { "heading": "E.2 MIXTURES OF GAUSSIANS WITH ADAM", "text": "Given the same parameter settings as in Section 5, we train the vanilla GAN using Adam, with the step size α = 0.0002, and β1 = 0.9, β2 = 0.999. As shown in Figure 6, Jacobi updates converge faster than the corresponding GS updates." }, { "heading": "E.3 JACOBI UPDATES MAY CONVERGE FASTER THAN GS UPDATES", "text": "Take α = 0.9625, β1 = β2 = β = 0.5722, and σ = 1, the Jacobi and GS OGD radii are separately 0.790283 and 0.816572 (by solving equation 3.6 and equation 3.7), which means that Jacobi OGD has better performance for this setting of parameters. A more intuitive picture is given as Figure 7, where we take β1 = β2 = β." }, { "heading": "E.4 SINGLE SINGULAR VALUE", "text": "We minimize r(θ, σ) for a given singular value numerically. WLOG, we take σ = 1, since we can rescale parameters to obtain other values of σ. We implement grid search for all the parameters within the range [−2, 2] and step size 0.05. For the step size α, we take it to be positive. We use {a, b, s} as a shorthand for {a, a+ s, a+ 2s, . . . , b}.\n• We first numerically solve the characteristic polynomial for Jacobi OGD equation 3.6, fixing α1 = α2 = α with scaling symmetry. With α ∈ {0, 2, 0.05}, βi ∈ {−2, 2, 0.05}, the best parameter setting is α = 0.7, β1 = 0.1 and β2 = 0.6. β1 and β2 can be switched. The optimal radius is 0.6.\n• We also numerically solve the characteristic polynomial for Gauss–Seidel OGD equation 3.7, fixing α1 = α2 = α with scaling symmetry. With α ∈ {0, 2, 0.05}, βi ∈ {−2, 2, 0.05}, the best parameter setting is α = 1.4, β1 = 0.7 and β2 = 0. β1 and β2 can be switched. The optimal rate is 1/(5 √ 2). This rate can be further improved to be zero where α = √ 2,\nβ1 = 1/ √ 2 and β2 = 0.\n• Finally, we numerically solve the polynomial for Gauss–Seidel momentum equation 3.11, with the same grid. The optimal parameter choice is α = 1.8, β1 = −0.1 and β2 = −0.05. β1 and β2 can be switched. The optimal rate is 0.5." }, { "heading": "F SPLITTING METHOD", "text": "In this appendix, we interpret the gradient-based algorithms (except PP) we have studied in this paper as splitting methods (Saad, 2003), for both Jacobi and Gauss–Seidel updates. 
By doing this, one can understand our algorithms better in the context of numerical linear algebra and compare our results in Section 3 with the Stein–Rosenberg theorem." }, { "heading": "F.1 JACOBI UPDATES", "text": "From equation 2.2, finding a saddle point is equivalent to solving:\nSz := [ 0 E −E> 0 ] [ x y ] = [ −b c ] =: d. (F.1)\nNow, we try to understand the Jacobi algorithms using splitting method. For GD and EG, the method splits S intoM −N and solve\nzt+1 = M −1Nzt +M −1d. (F.2) For GD, we can obtain that:\nM =\n[ α−11 I 0\n0 α−12 I\n] , N = [ α−11 I −E E> α−12 I ] . (F.3)\nFor EG, we need to compute an inverse:\nM−1 = [ α1I −β1E β2E > α2I ] , N = M − S. (F.4)\nGiven det(α1α2I + β1β2EE>) 6= 0, the inverse always exists. The splitting method can also work for second-step methods, such as OGD and momentum. We split S = M −N − P and solve:\nzt+1 = M −1Nzt +M −1Pzt−1 +M −1d. (F.5)\nFor OGD, we have:\nM =\n[ I\nα1−β1 0\n0 Iα2−β2\n] , N = Iα1−β1 − α1Eα1−β1 α2E >\nα2−β2 I α2−β2\n , P = [ 0 β1Eα1−β1− β2E>α2−β2 0 ] . (F.6)\nFor the momentum method, we can write:\nM =\n[ α−11 I 0\n0 α−12 I\n] , N = [ 1+β1 α1 I −E\nE> 1+β2α2 I\n] , P = [ − β1α1 I 0\n0 − β2α2 I\n] . (F.7)" }, { "heading": "F.2 GAUSS–SEIDEL UPDATES", "text": "Now, we try to understand the GS algorithms using splitting method. For GD and EG, the method splits S intoM −N and solve\nzt+1 = M −1Nzt +M −1d. (F.8) For GD, we can obtain that:\nM = [ α−11 I 0 −E> α−12 I ] , N = [ α−11 I −E 0 α−12 I ] . (F.9)\nFor EG, we need to compute an inverse:\nM−1 =\n[ α1I −β1E\n(β2 + α1α2)E > α2(I − β1E>E)\n] , N = M − S. (F.10)\nThe splitting method can also work for second-step methods, such as OGD and momentum. We split S = M −N − P and solve:\nzt+1 = M −1Nzt +M −1Pzt−1 +M −1d. (F.11)\nFor OGD, we obtain:\nM = Iα1−β1 0 − α2E >\nα2−β2 I α2−β2\n , N = I α1−β1 − α1E α1−β1\n− β2E >\nα2−β2 I α2−β2\n , P = 0 β1Eα1−β1\n0 0 . (F.12) For the momentum method, we can write:\nM = [ α−11 I 0 −E> α−12 I ] , N = [ 1+β1 α1 I −E 0 1+β2α2 I ] , P = [ − β1α1 I 0 0 − β2α2 I ] . (F.13)" }, { "heading": "G SINGULAR BILINEAR GAMES", "text": "In this paper we considered the bilinear game when E is a non-singular square matrix for simplicity. Now let us study the general case where E ∈ Rm×n. As stated in Section 2, saddle points exist iff\nb ∈ R(E), c ∈ R(E>). (G.1)\nAssume b = Eb′, c = E>c′. One can shift the origin of x and y: x→ x− b′, y → y − c′, such that the linear terms cancel out. Therefore, the min-max optimization problem becomes:\nmin x∈Rm max y∈Rn\nx>Ey. (G.2)\nThe set of saddle points is:\n{(x,y)|y ∈ N (E),x ∈ N (E>)}. (G.3)\nFor all the first-order algorithms we study in this paper, x(t) ∈ x(0)+R(E) and y(t) ∈ y(0)+R(E>). Since for any matrix X ∈ Rp×q, R(X) ⊕ N (X>) = Rp, if the algorithm converges to a saddle point, then this saddle point is uniquely defined by the initialization:\nx∗ = P⊥Ex (0), y∗ = P⊥E>y (0), (G.4)\nwhere\nP⊥X := I −X†X, (G.5)\nis the orthogonal projection operator onto the null space ofX , andX† denotes the Moore–Penrose pseudoinverse. Therefore, the convergence to the saddle point is described by the distances of x(t) and y(t) to the null spaces N (E>) and N (E). We consider the following measure:\n∆2t = ||E†Ey(t)||2 + ||EE†x(t)||2, (G.6)\nas the Euclidean distance of z(t) = (x(t),y(t)) to the space of saddle points N (E>) × N (E). Consider the singular value decomposition of E:\nE = U [ Σr 0 0 0 ] V >, (G.7)\nwith Σr ∈ Rr×r diagonal and non-singular. 
Define:\nv(t) = V >y(t), u(t) = U>x(t), (G.8)\nand equation G.6 becomes:\n∆2t = ||v(t)r ||2 + ||u(t)r ||2, (G.9)\nwith vr denoting the sub-vector with the first r elements of v. Hence, the convergence of the bilinear game with a singular matrix E reduces to the convergence of the bilinear game with a non-singular matrix Σr, and all our previous analysis still holds." } ]
2020
null
SP:0a523e5c8790b62fef099d7c5bec61bb18a2703c
[ "In this paper, the authors tackle the problem of multi-modal image-to-image translation by pre-training a style-based encoder. The style-based encoder is trained with a triplet loss that encourages similarity between images with similar styles and dissimilarity between images with different styles. The output of the encoder is a style embedding that helps differentiates different modes of image synthesis. When training the generator for image synthesis, the input combines an image in the source and a style embedding, and the loss is essentially the sum of image conditional GAN loss and perceptual loss. Additionally, the authors propose a mapping function to sample styles from a unit Gaussian distribution.", "The authors propose to use a non-end-to-end approach to the problem of multi-modal I2I. Firstly, a metric learning problem is solved to embed images into space, taking into account the pairwise style discrepancy (style is defined, e.g., based on VGG Gramians). As the notion of style is universal for similar datasets, this step further is shown to be generalizable. Secondly, the generator is trained on a supervised image translation tasks: the original image and the style, extracted from the target image, are fed to the generator, and the output is a translated image. Thirdly, style encoder and generator are simultaneously finetuned." ]
Image-to-image (I2I) translation aims to translate images from one domain to another. To tackle the multi-modal version of I2I translation, where input and output domains have a one-to-many relation, an extra latent input is provided to the generator to specify a particular output. Recent works propose involved training objectives to learn a latent embedding, jointly with the generator, that models the distribution of possible outputs. Alternatively, we study a simple, yet powerful pre-training strategy for multi-modal I2I translation. We first pre-train an encoder, using a proxy task, to encode the style of an image, such as color and texture, into a low-dimensional latent style vector. Then we train a generator to transform an input image along with a style-code to the output domain. Our generator achieves state-of-the-art results on several benchmarks with a training objective that includes just a GAN loss and a reconstruction loss, which simplifies and speeds up the training significantly compared to competing approaches. We further study the contribution of different loss terms to learning the task of multi-modal I2I translation, and finally we show that the learned style embedding is not dependent on the target domain and generalizes well to other domains.
[]
[ { "authors": [ "Amjad Almahairi", "Sai Rajeshwar", "Alessandro Sordoni", "Philip Bachman", "Aaron Courville" ], "title": "Augmented CycleGAN: Learning many-to-many mappings from unpaired data", "venue": null, "year": 2018 }, { "authors": [ "Qifeng Chen", "Vladlen Koltun" ], "title": "Photographic image synthesis with cascaded refinement networks", "venue": "In iccv,", "year": 2017 }, { "authors": [ "Yunjey Choi", "Minje Choi", "Munyoung Kim", "Jung-Woo Ha", "Sunghun Kim", "Jaegul Choo" ], "title": "Stargan: Unified generative adversarial networks for multi-domain image-to-image translation", "venue": "In cvpr,", "year": 2018 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In ICCV, pp", "year": 2015 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "iclr,", "year": 2016 }, { "authors": [ "Hao Dong", "Simiao Yu", "Chao Wu", "Yike Guo" ], "title": "Semantic image synthesis via adversarial learning", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Leon A Gatys", "Alexander S Ecker", "Matthias Bethge" ], "title": "Image style transfer using convolutional neural networks", "venue": null, "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In nips,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In nips,", "year": 2017 }, { "authors": [ "Yedid Hoshen", "Lior Wolf" ], "title": "Nam: Non-adversarial unsupervised domain mapping", "venue": "In eccv,", "year": 2018 }, { "authors": [ "Yedid Hoshen", "Ke Li", "Jitendra Malik" ], "title": "Non-adversarial image synthesis with generative latent nearest neighbors", "venue": "In cvpr,", "year": 2019 }, { "authors": [ "Xun Huang", "Ming-Yu Liu", "Serge Belongie", "Jan Kautz" ], "title": "Multimodal unsupervised image-to-image translation", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": null, "year": 2017 }, { "authors": [ "Wu Jie" ], "title": "Facial expression recognition", "venue": "Facial-Expression-Recognition.Pytorch,", "year": 2018 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Animesh Karnewar", "Raghu Sesha Iyengar" ], "title": "Msg-gan: Multi-scale gradients gan for more stable and synchronized multi-scale image synthesis", "venue": null, "year": 1903 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of GANs for improved quality, stability, and variation", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "cvpr,", "year": 2019 }, { "authors": [ "Taeksoo Kim", "Moonsu Cha", "Hyunsoo Kim", "Jung Kwon Lee", "Jiwon Kim" ], "title": "Learning to discover cross-domain relations with generative adversarial networks", "venue": null, "year": 2017 }, 
{ "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "ICLR,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Anders Boesen Lindbo Larsen", "Søren Kaae Sønderby", "Hugo Larochelle", "Ole Winther" ], "title": "Autoencoding beyond pixels using a learned similarity metric", "venue": "icml,", "year": 2016 }, { "authors": [ "Christian Ledig", "Lucas Theis", "Ferenc Huszár", "Jose Caballero", "Andrew Cunningham", "Alejandro Acosta", "Andrew P Aitken", "Alykhan Tejani", "Johannes Totz", "Zehan Wang" ], "title": "Photo-realistic single image super-resolution using a generative adversarial network", "venue": null, "year": 2017 }, { "authors": [ "Hsin-Ying Lee", "Hung-Yu Tseng", "Jia-Bin Huang", "Maneesh Kumar Singh", "Ming-Hsuan Yang" ], "title": "Diverse image-to-image translation via disentangled representations", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Ke Li", "Jitendra Malik" ], "title": "Implicit maximum likelihood estimation", "venue": "arXiv preprint arXiv:1809.09087,", "year": 2018 }, { "authors": [ "Ming-Yu Liu", "Thomas Breuel", "Jan Kautz" ], "title": "Unsupervised image-to-image translation networks", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Liqian Ma", "Xu Jia", "Stamatios Georgoulis", "Tinne Tuytelaars", "Luc Van Gool" ], "title": "Exemplar guided unsupervised image-to-image translation", "venue": "iclr,", "year": 2019 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Elman Mansimov", "Emilio Parisotto", "Jimmy Lei Ba", "Ruslan Salakhutdinov" ], "title": "Generating images from captions with attention", "venue": "iclr,", "year": 2016 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond YK Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "Least squares generative adversarial networks", "venue": null, "year": 2017 }, { "authors": [ "Ricardo Martin-Brualla" ], "title": "Space needle timelapse", "venue": null, "year": 2007 }, { "authors": [ "Ricardo Martin-Brualla", "Rohit Pandey", "Shuoran Yang", "Pavel Pidlypenskyi", "Jonathan Taylor", "Julien Valentin", "Sameh Khamis", "Philip Davidson", "Anastasia Tkach", "Peter Lincoln", "Adarsh Kowdle", "Christoph Rhemann", "Dan B Goldman", "Cem Keskin", "Steve Seitz", "Shahram Izadi", "Sean Fanello" ], "title": "LookinGood: Enhancing performance capture with real-time neural re-rendering", "venue": "In Proc. 
SIGGRAPH Asia,", "year": 2018 }, { "authors": [ "Moustafa Meshry", "Dan B Goldman", "Sameh Khamis", "Hugues Hoppe", "Rohit Pandey", "Noah Snavely", "Ricardo Martin-Brualla" ], "title": "Neural rerendering in the wild", "venue": "cvpr,", "year": 2019 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon Shlens" ], "title": "Conditional image synthesis with auxiliary classifier gans", "venue": "In icml,", "year": 2017 }, { "authors": [ "Omkar M Parkhi", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep face recognition", "venue": "In bmvc,", "year": 2015 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": null, "year": 2016 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "iclr,", "year": 2016 }, { "authors": [ "Scott Reed", "Zeynep Akata", "Xinchen Yan", "Lajanugen Logeswaran", "Bernt Schiele", "Honglak Lee" ], "title": "Generative adversarial text to image synthesis", "venue": "icml,", "year": 2016 }, { "authors": [ "Mihaela Rosca", "Balaji Lakshminarayanan" ], "title": "Variational approaches for auto-encoding generative adversarial", "venue": "networks. stat,", "year": 2017 }, { "authors": [ "Amélie Royer", "Konstantinos Bousmalis", "Stephan Gouws", "Fred Bertsch", "Inbar Mosseri", "Forrester Cole", "Kevin Murphy" ], "title": "Xgan: Unsupervised image-to-image translation for many-to-many mappings", "venue": "arXiv preprint arXiv:1711.05139,", "year": 2017 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In cvpr, pp", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": null, "year": 2014 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": "In nips, pp", "year": 2015 }, { "authors": [ "Chen Sun", "Abhinav Shrivastava", "Saurabh Singh", "Abhinav Gupta" ], "title": "Revisiting unreasonable effectiveness of data in deep learning era", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Justus Thies", "Michael Zollhöfer", "Matthias Nießner" ], "title": "Deferred neural rendering: Image synthesis using neural textures", "venue": "tog,", "year": 2019 }, { "authors": [ "Aaron Van den Oord", "Nal Kalchbrenner", "Lasse Espeholt", "Oriol Vinyals", "Alex Graves" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "In nips,", "year": 2016 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Highresolution image synthesis and semantic manipulation with conditional gans", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Nikolai Yakovenko", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Video-to-video 
synthesis", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Abhinav Gupta" ], "title": "Generative image modeling using style and structure adversarial networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In eccv,", "year": 2016 }, { "authors": [ "Richard Zhang", "Jun-Yan Zhu", "Phillip Isola", "Xinyang Geng", "Angela S Lin", "Tianhe Yu", "Alexei A Efros" ], "title": "Real-time user-guided image colorization with learned deep priors. tog", "venue": null, "year": 2017 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In cvpr,", "year": 2018 }, { "authors": [ "Zhifei Zhang", "Yang Song", "Hairong Qi" ], "title": "Age progression/regression by conditional adversarial autoencoder", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Richard Zhang", "Deepak Pathak", "Trevor Darrell", "Alexei A Efros", "Oliver Wang", "Eli Shechtman" ], "title": "Toward multimodal image-to-image translation", "venue": "NeurIPS,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Image-to-Image (I2I) translation is the task of transforming images from one domain to another (e.g., semantic maps→ scenes, sketches→ photo-realistic images, etc.). Many problems in computer vision and graphics can be cast as I2I translation, such as photo-realistic image synthesis (Chen & Koltun (2017); Isola et al. (2017); Wang et al. (2018a)), super-resolution (Ledig et al. (2017)), colorization (Zhang et al. (2016; 2017a)), and inpainting (Pathak et al. (2016)). Therefore, I2I translation has recently received significant attention in the literature. One main challenge in I2I translation is the multi-modal nature for many such tasks – the relation between an input domain A and an output domain B is often times one-to-many, where a single input image IAi ∈ A can be mapped to different output images from domain B. For example, a sketch of a shoe or a handbag can be mapped to corresponding objects with different colors or styles, or a semantic map of a scene can be mapped to many scenes with different appearance, lighting and/or weather conditions. Since I2I translation networks typically learn one-to-one mappings due to their deterministic nature, an extra input is required to specify an output mode to which an input image will be translated. Simply injecting extra random noise as input proved to be ineffective as shown in (Isola et al. (2017); Zhu et al. (2017b)), where the generator network just learns to ignore the extra noise and collapses to a single or few modes (which is one form of the mode collapse problem). To overcome this problem, Zhu et al. (2017b) proposed BicycleGAN, which learns to encode the distribution of different possible outputs into a latent vector z, and then learns a deterministic mapping G : (A, z)→ B. So, depending on the latent vector z, a single input IAi ∈ A can be mapped to multiple outputs in B. While BicycleGAN requires paired training data, several works (Lee et al. (2018); Huang et al. (2018)) extended it to the unsupervised case, where images in domains A and B are not in correspondence (‘unpaired’). One main component of unpaired I2I is a cross-cycle consistency constraint, where the network generates an intermediate output by swapping the styles of a pair of images, then swaps the style between the intermediate output again to reconstruct the original images. This enforces that the latent vector z preserves the encoded style information when translated from an image i to another image j and back to image i again. This constraint can also be applied to paired training data, where it encourages style/attribute transfer between images. However, training BicycleGAN (Zhu et al. (2017b)) or its unsupervised counterparts (Huang et al. (2018); Lee et al. (2018)) is not trivial. For example,\nBicycleGAN combines the objectives of both conditional Variational Auto-Encoders (cVAEs) (Sohn et al. (2015)) and a conditional version of Latent Regressor GANs (cLR-GANs) (Donahue et al. (2016); Dumoulin et al. (2016)) to train their network. The training objective of (Huang et al. (2018); Lee et al. (2018)) is even more involved to handle the unsupervised setup.\nIn this work, we aim to simplify the training of general purpose multi-modal I2I translation networks, while also improving the diversity and expressiveness of different styles in the output domain. Our approach is inspired by the work of Meshry et al. 
(2019), which utilizes a staged training strategy to re-render scenes under different lighting, time of day, and weather conditions. We propose a pretraining approach for style encoders in multi-modal I2I translation networks, which makes the training simpler and faster by requiring fewer losses/constraints. Our approach is also inspired by the standard training paradigm in visual recognition of first pretraining on a proxy task, using either large supervised datasets (e.g., ImageNet) (Krizhevsky et al. (2012); Sun et al. (2017); Mahajan et al. (2018)) or unsupervised tasks (e.g., Doersch et al. (2015); Noroozi & Favaro (2016)), and then fine-tuning (transfer learning) on the desired task. Similarly, we propose to pretrain the encoder using a proxy task that encourages capturing style into a latent space. Our goal is to highlight the importance of pretraining for I2I networks and demonstrate that a simple approach can be very effective for multi-modal image synthesis. In particular, we make the following contributions:

• We explore style pretraining and its generalization for the task of multi-modal I2I translation, which simplifies and speeds up the training compared to competing approaches.

• We provide a study of the importance of different losses and regularization terms for multi-modal I2I translation networks.

• We show that the pretrained latent embedding is not dependent on the target domain and generalizes well to other domains (transfer learning).

• We achieve state-of-the-art results on several benchmarks in terms of style capture and transfer, and diversity of results." }, { "heading": "2 RELATED WORK", "text": "Deep generative models There has been incredible progress in the field of image synthesis using deep neural networks. In the unconditional setting, a decoder network learns to map random values drawn from a prior distribution (typically Gaussian) to output images. Variational Auto-Encoders (VAEs) (Kingma & Welling (2014)) assume a bijective mapping between output images and some latent distribution and learn to map the latent distribution to a unit Gaussian using the reparameterization trick. Alternatively, Generative Adversarial Networks (GANs) (Goodfellow et al. (2014)) directly map random values sampled from a unit Gaussian to images, while using a discriminator network to enforce that the distribution of generated images resembles that of real images. Recent works proposed improvements to stabilize the training (Gulrajani et al. (2017); Karnewar & Iyengar (2019); Mao et al. (2017); Radford et al. (2016)) and improve the quality and diversity of the output (Karras et al. (2018; 2019)). Other works combine both VAEs and GANs into a hybrid VAE-GAN model (Larsen et al. (2016); Rosca et al. (2017)).

Conditional image synthesis Instead of generating images from input noise, the generator can be augmented with side information in the form of extra conditional inputs. For example, Sohn et al. (2015) extended VAEs to their conditional setup (cVAEs). Also, GANs can be conditioned on different information, like class labels (Mirza & Osindero (2014); Odena et al. (2017); Van den Oord et al. (2016)), language descriptions (Mansimov et al. (2016); Reed et al. (2016)), or an image from another domain (Chen & Koltun (2017); Isola et al. (2017)). The latter is called Image-to-Image translation.

Image-to-Image (I2I) translation I2I translation is the task of transforming an image from one domain, such as a sketch, into another domain, such as photo-realistic images.
While there are regression-based approaches to this problem (Chen & Koltun (2017); Hoshen & Wolf (2018)), significant successes in this field are based on GANs and the influential work of pix2pix (Isola et al. (2017)). Following the success of pix2pix (Isola et al. (2017)), I2I translation has since been utilized in a large number of tasks, like inpainting (Pathak et al. (2016)), colorization (Zhang et al. (2016; 2017a)), super-resolution (Ledig et al. (2017)), rendering (Martin-Brualla et al. (2018); Meshry et al. (2019); Thies et al. (2019)), and many more (Dong et al. (2017); Wang & Gupta (2016); Zhang et al. (2017b)). There have also been works extending this task to the unsupervised setting (Hoshen & Wolf (2018); Kim et al. (2017); Liu et al. (2017); Ma et al. (2019); Royer et al. (2017); Zhu et al. (2017a)), to multiple domains (Choi et al. (2018)), and to videos (Chan et al. (2018); Wang et al. (2018b)).

Multi-modal I2I translation Image translation networks are typically deterministic function approximators that learn a one-to-one mapping between inputs and outputs. To extend I2I translation to the case of diverse multi-modal outputs, Zhu et al. (2017b) proposed the BicycleGAN framework, which learns a latent distribution that encodes the variability in the output domain and conditions the generator on this extra latent vector for multi-modal image synthesis. Wang et al. (2018a;b) learn instance-wise latent features for different objects in a target image, which are clustered after training to find a fixed number of modes for different semantic classes. At test time, they sample one of the feature clusters for each object to achieve multi-modal synthesis. Other works extended the multi-modal I2I framework to the unpaired setting, where images from the input and output domains are not in correspondence (Almahairi et al. (2018); Huang et al. (2018); Lee et al. (2018)), by augmenting BicycleGAN with different forms of a cross-cycle consistency constraint between two unpaired images. In our work, we focus on the supervised setting of multi-modal I2I translation. We propose a simple, yet effective, pretraining strategy to learn a latent distribution that encodes variability in the output domain. The learned distribution can be easily adapted to new unseen datasets with simple fine-tuning, instead of training from random initialization." }, { "heading": "3 APPROACH", "text": "Current multi-modal image translation networks require an extra input z that allows for modelling the one-to-many relation between an input domain A and an output domain B as a one-to-one relation from a pair of inputs (A, z) → B. In previous approaches, there has been a trade-off between simplicity and effectiveness in providing the input z. On one hand, providing random noise as the extra input z maintains a simple training objective (same as in pix2pix (Isola et al. (2017))). However, Isola et al. (2017); Zhu et al. (2017b) showed that the generator has little incentive to utilize the input vector z since it only encodes random information, and therefore the generator ends up ignoring z and collapsing to one or a few modes. On the other hand, BicycleGAN (Zhu et al. (2017b)) combines the objectives of both conditional Variational Auto-Encoder GANs (cVAE-GAN) and conditional Latent Regressor GANs (cLR-GAN) to learn a latent embedding z simultaneously with the generator G. Their training enforces two cycle consistencies: B → z → B̂ and z → B̃ → ẑ.
This proved to be very effective, but the training objective is more involved, which makes the training slower. Also, since the latent embedding is being trained simultaneously with the generator, hyper-parameter tuning becomes more critical and sensitive.

We aim to combine the best of both worlds: an effective training of a latent embedding that models the distribution of possible outputs, while retaining a simple training objective. This would allow for faster and more efficient training, as well as less sensitivity to hyper-parameters. We observe that the variability in many target domains can be represented by the style diversity of images in the target domain B, where the style is defined in terms of the Gram matrices used in the Neural Style Transfer literature (Gatys et al. (2016)). Then, we learn an embedding by separately training an encoder network E on an auxiliary task that optimizes z = E(I^B) to capture the style of an image I^B. Finally, since we now have learned a deterministic mapping between z and the style of the target output image I^B, training the generator G becomes simpler, as G is just required to discover the correlation between output images and their corresponding style embedding z.

To incorporate this into BicycleGAN (Zhu et al. (2017b)), we replace the simultaneous training of the encoder E and the generator G with a staged training as follows:

Stage 1: Pretrain E on a proxy task that optimizes an embedding of images in the output domain B into a low-dimensional style latent space, such that images with similar styles lie close in that space (i.e., clustered).

Stage 2: Train the generator network G while fixing the encoder E, so that G learns to associate the style of output images with their deterministic style embedding z = E(I^B).

Stage 3: Fine-tune both the E and G networks together, allowing the style embedding to be further adapted to best suit the image synthesis task for the target domain.

The intuition for why such staged training is effective for multi-modal I2I translation is that the encoder is pretrained to model different modes of the output distribution as clusters of images with similar styles (see Figures 6 and 7 in the supplementary material for a visualization of pretrained latent embeddings). During stage 2, the latent space is kept fixed, and the input latent to the generator can be used to clearly distinguish the style cluster to which the output belongs, which makes the multi-modal synthesis task easier for the generator. Finally, stage 3 fine-tunes the learned embedding to better serve the synthesis task at hand. Next, we explain how to pre-train the style encoder network E in Section 3.1, and how to train the generator G using the pre-learned embeddings (Section 3.2). Finally, we demonstrate the generalization of pre-training the style encoder E in Section 3.3." }, { "heading": "3.1 PRE-TRAINING THE STYLE ENCODER E", "text": "The goal of pre-training the encoder network E is to learn a deterministic mapping from the style of a target image I^B_i ∈ B to a latent style code z_i = E(I^B_i). Ideally, images with similar styles should be close in the style embedding space, while images with different styles should be far from each other. To supervise the training of such an embedding, we utilize the style loss (Gatys et al. (2016)) as a distance metric to measure the style similarity between any two given images. The style encoder network E is then trained using a triplet loss (Schroff et al.
(2015)), whose input is a triplet of images (I_a, I_p, I_n), where (I_a, I_p) have similar styles and (I_a, I_n) have different styles, as measured by the style loss metric. The training objective for E is given by:

L_tri(I_a, I_p, I_n) = max( ‖z_a − z_p‖₂² − ‖z_a − z_n‖₂² + α, 0 ) + λ L_reg(z_a, z_p, z_n),   (1)

where α is a separation margin and λ is a relative weighting parameter between the main triplet objective and an optional regularization term L_reg(·), which is an L2 regularization to encourage learning a compact latent space.

Triplet selection. To generate triplets for pre-training the encoder E, we compute the set of k_c closest and k_f furthest neighbors for each anchor image I_a, as measured by the style loss. Then, for each anchor image I_a, we randomly sample a positive image I_p and a negative image I_n from the sets of closest and furthest neighbors, respectively. We found that, for large datasets, it is sufficient to generate triplets for a subset of the training images. One challenge is the set of images with an outlier style. Such images will be furthest neighbors to most images, and can mislead the training by just projecting outlier images to separate clusters. To deal with this, we sample the negative style image I_n from a larger set of furthest neighbors, while the positive image I_p is sampled from a small set of closest neighbors so that it has reasonable style similarity to the anchor image." }, { "heading": "3.2 GENERATOR TRAINING", "text": "After pre-training the style encoder E (stage 1), we have established a mapping from images in the output domain, I^B ∈ B, to their style embedding z = E(I^B). Feeding the style embedding as input to the generator during training gives the generator a strong incentive to associate the style of output images with their corresponding style embedding instead of learning to hallucinate the style. It is important to retain the deterministic correspondence between images and their style codes to make it easier for the generator to discover this correlation. This is why, during stage 2, we keep the weights of the style encoder E fixed. The forward pass reconstructs a training image I^B_i as Î^B_i = G(I^A_i, z_i), where z_i = E(I^B_i). The training objective is similar to that of pix2pix (Isola et al. (2017)),

L_img(I^B_i, Î^B_i) = L_cGAN(I^B_i, Î^B_i) + λ_rec L_rec(I^B_i, Î^B_i),   (2)

where we use the Least Squares GAN (LSGAN) loss (Mao et al. (2017)) for the L_cGAN term, and a VGG-based perceptual loss (Johnson et al. (2016)) for the reconstruction term L_rec. Once the generator has learned to associate the output style with the input style embedding, stage 3 fine-tunes both the generator G and the style encoder E together using the same objective (2).

Style sampling. To perform multi-modal synthesis on a given input at test time, we can capture the latent vector z from any existing image and transfer the style to the generated image. However, if we wish to sample styles directly from the latent distribution, we can optionally enforce a prior on the latent distribution. For example, we can add a KL divergence term on the latent vectors to enforce a unit Gaussian prior. In our experiments, we found it more effective to add an L2 regularization on the latent vectors to enforce zero-mean embeddings and limit the variance of the latent space. We then compute an empirical standard deviation for sampling. Another alternative to enable sampling is to train a mapper network M to map the unit Gaussian to the latent distribution.
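Before turning to how the mapper network M is trained (next paragraph), here is a minimal PyTorch-style sketch of the pre-training objective in equation (1), together with a single-layer Gram-matrix style distance used to rank neighbors for triplet selection. The encoder E, the margin and weight values, and the use of squared Euclidean distances (following Schroff et al. (2015)) are illustrative assumptions, not the exact implementation:

```python
import torch
import torch.nn.functional as F

def gram(feats):
    # Gram matrix of VGG feature maps: (B, C, H, W) -> (B, C, C).
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_distance(feats_x, feats_y):
    # Style distance between two images at one VGG layer (the style loss
    # sums this over several layers); used only to rank neighbors for
    # triplet selection, not back-propagated through.
    return ((gram(feats_x) - gram(feats_y)) ** 2).flatten(1).sum(dim=1)

def triplet_loss(E, I_a, I_p, I_n, alpha=0.1, lam=0.01):
    # Equation (1): margin triplet loss on style codes plus an L2
    # regularizer that keeps the latent space compact (alpha and lam
    # are assumed values, not from the paper).
    z_a, z_p, z_n = E(I_a), E(I_p), E(I_n)
    pos = (z_a - z_p).pow(2).sum(dim=1)    # ||z_a - z_p||^2
    neg = (z_a - z_n).pow(2).sum(dim=1)    # ||z_a - z_n||^2
    reg = z_a.pow(2).sum(1) + z_p.pow(2).sum(1) + z_n.pow(2).sum(1)
    return (F.relu(pos - neg + alpha) + lam * reg).mean()
```

In practice, I_p would be drawn from the k_c closest and I_n from the k_f furthest neighbors under style_distance, as described in the triplet selection paragraph above.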
Training the mapper network M can be done as a post-processing step after the style encoder has been trained and fine-tuned. Specifically, we follow the nearest-neighbor-based Implicit Maximum Likelihood Estimation (IMLE) training (Li & Malik (2018); Hoshen et al. (2019)) to train the mapper network M. The training objective is given by:

M = argmin_{M̃} Σ_i ‖z_i − M̃(e_i)‖₂²,   with   e_i = argmin_{r_j} ‖z_i − M(r_j)‖₂²,   (3)

where {r_j} is a set of random samples from the unit Gaussian prior, and for each latent code z_i, we select the sample e_i whose mapping M(e_i) is the nearest neighbor to z_i." }, { "heading": "3.3 GENERALIZING THE PRE-TRAINING STAGE", "text": "The goal of the style pretraining is to learn an embedding that mimics the style loss, so that images with similar styles lie close in the embedding space. Since the definition of image style in the neural style transfer literature is general and is not dependent on a specific image domain, encoding an image I to its style embedding can also be seen as a general task that is independent of the output domain B. This allows for performing the pretraining stage only once using auxiliary training data. The fine-tuning stage eventually tweaks the embedding to better suit the specific target domain B. We show experimentally in Section 4 that pretraining the style encoder on datasets other than the target domain B does not degrade the performance. It can even improve the performance if the target dataset is small, in which case pretraining on an auxiliary dataset helps with the generalization of the overall model." }, { "heading": "4 EXPERIMENTAL EVALUATION", "text": "Datasets. We evaluate our approach on five standard I2I translation benchmarks used in Isola et al. (2017) and Zhu et al. (2017b): architectural labels → photo, aerial → map, edges → shoes/handbags, and night → day. In addition, we use the Space Needle timelapse dataset (Martin-Brualla, 2007), which consists of 2068 paired images with an 8280 × 1080 resolution, where the input domain includes images with temporally smoothed appearance, and the output domain contains real images spanning different lighting and weather conditions. Baselines. While we report numbers for retrained models using the code released with BicycleGAN (BicycleGAN v0) for completeness, we mainly compare to two stronger baselines:

• BicycleGAN v1: we implement BicycleGAN using the same network architecture as used in our approach to have a fair comparison.

• BicycleGAN v2: we augment BicycleGAN with the cross-cycle consistency constraint introduced in (Huang et al., 2018; Lee et al., 2018) as follows: the input is a pair of training examples (I^A_1, I^B_1), (I^A_2, I^B_2), for which we obtain the respective style embeddings z_1 = E(I^B_1), z_2 = E(I^B_2). We then apply a 2-step cyclic reconstruction of I^B_1, I^B_2; in the first step, we generate both images with a swapped style: u = G(I^A_1, z_2), v = G(I^A_2, z_1). In the second step, we re-capture the latent style vectors ẑ_2 = E(u), ẑ_1 = E(v) and generate the original images I^B_1, I^B_2 by swapping the style again: Î^B_1 = G(I^A_1, ẑ_1), Î^B_2 = G(I^A_2, ẑ_2). We add a cyclic reconstruction term for Î^B_1, Î^B_2." }, { "heading": "4.1 EVALUATION", "text": "Image reconstruction. We report the reconstruction quality of validation set images, using both PSNR and AlexNet-based LPIPS (Zhang et al., 2018) metrics, in Table 1.
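As a companion to equation (3) above, the following is a minimal sketch of one IMLE update for the mapper network M; the sample-pool size, the optimizer handling, and the batched nearest-neighbor search are assumptions for illustration, not the paper's exact implementation:

```python
import torch

def imle_step(mapper, z_batch, optimizer, n_samples=64):
    # One IMLE update (equation (3)): for each style code z_i, find the
    # unit-Gaussian sample whose current mapping M(r_j) is nearest to z_i,
    # then regress that mapping toward z_i.
    r = torch.randn(n_samples, z_batch.shape[1], device=z_batch.device)
    with torch.no_grad():
        dists = torch.cdist(z_batch, mapper(r))   # (B, n_samples)
        nn_idx = dists.argmin(dim=1)              # e_i for each z_i
    loss = ((z_batch - mapper(r[nn_idx])) ** 2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```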
Note that our results without fine-tuning (stage 2) are on par with the baselines, which verifies the validity of our approach and shows that style-based encoder pre-training successfully learns to distinguish different modes in the output domain, which proves effective for training multi-modal I2I networks. Fine-tuning (stage 3) further improves our results compared to the baselines. Figure 2 shows qualitatively how our approach reconstructs the target style more faithfully.

Style transfer and sampling. Figure 3 shows style transfer to validation set images from different datasets. We can also sample random styles directly from the latent distribution, as described in Section 3.2. Figure 5 shows results both for ad-hoc sampling from the assumed N(µ, σ) empirical distribution and for formally sampling from a unit Gaussian using the mapper network M. While both results look good, we note that the assumption behind ad-hoc sampling is not explicitly enforced, and thus it could sample bad style codes outside the distribution (see Appendix A.5 for examples).

Style interpolation. Figure 4 shows style interpolation by linearly interpolating between two latent vectors. Note the smooth change in lighting and cloud patterns when going from cloudy to sunny (second row).

Pre-training generalization. Since the notion of style, as defined in the Neural Style Transfer literature, is universal and not specific to a certain domain, we hypothesized that style-based encoder pretraining would learn a generic style embedding that can generalize across multiple domains and be effective for multi-modal I2I translation. Here, we experimentally verify our hypothesis in Table 2. For a target dataset, we train the generator G three times, each with a different pre-training of the style encoder E: (1) same-dataset pre-training: pre-train E using the output domain B of the target dataset. (2) similar-domain pre-training: pre-train on a different dataset whose output domain bears resemblance to the output domain of the target dataset (e.g., edges2shoes and edges2handbags, or day images from night2day and the Space Needle timelapse dataset). (3) different-domain pre-training: pre-train on a different dataset whose output domain has different styles from that of the target dataset (e.g., the edges2handbags and Space Needle timelapse datasets, or the night2day and edges2handbags datasets). Table 2 shows that without fine-tuning (stage 2), the edges2handbags dataset shows a slight performance degradation when going from pre-training on the same dataset, to pre-training on a similar-domain dataset, and finally to pre-training on a different-domain dataset. On the other hand, the night2day dataset has only ∼100 unique scenes for training, so pre-training on another dataset such as Space Needle generalizes better to new scenes in the validation set, since it helps avoid overfitting the small number of unique scenes in the training set. After fine-tuning, performance differences shrink further and become insignificant. We also investigate the generalization of encoder pre-training using non-style distance metrics in Appendix A.4. Ablative study. We investigate the role of different loss terms as we transition from the loss setup of the baselines to that of our training approach. We first remove the variational part in both the BicycleGAN v1 and BicycleGAN v2 baselines, resulting in Bicycle v1.2, v2.2. We further remove the Gaussian prior and replace the KL loss with an L2 regularization in Bicycle v1.3, v2.3.
To maintain random latent vector sampling during training without a prior, we sample a random training image and use its style code. We define different versions of our approach (v1, v2, v3, and v4) based on different loss setups during training as follows: we start with ‘Ours v1’, which has the same setup as Bicycle v2.3, except that it uses pre-trained embeddings as described in Section 3.1. We then remove the cyclic reconstruction, random z sampling, and L2 regularization terms, resulting in ‘Ours v2’, ‘v3’, and ‘v4’, respectively. We run each setup on the edges2handbags dataset. In order to draw more reliable conclusions, we repeat each experiment 3 times and report the mean and standard deviation in Table 4. We notice that removing the variational part in VAEs is enough to achieve good results. While VAEs in general are robust to noise in the input latent, we observe that this comes at the expense of the expressiveness of the latent space (e.g., less faithful style capture and transfer), especially for low-dimensional latents. We also observe that our approach generally performs better with fewer constraints (loss terms). For example, “Ours v1, v2” have lower results than their “Bicycle v1.3, v2.3” counterparts. This shows that the main benefit of pre-trained embeddings arises when the network is less constrained.

Diversity and user study. We evaluate diversity by computing the average LPIPS distance over 1600 output images. We measure diversity in two setups: we sample 100 validation images, and (1) apply style transfer from 16 randomly sampled images, or (2) sample 16 random codes using the mapper network M, to obtain 1600 outputs. We also measure the realism and faithfulness of style transfer through a user study, where 30 participants are shown an input shoe sketch, an input style image, and two style transfer outputs. They are given unlimited time to choose which output looks more realistic and, if both are realistic, which transfers the style more faithfully. We fix the ‘Ours v4’ approach as the anchor and compare other methods to it. Table 3 shows that the baselines achieve lower diversity and user preference compared to our approach, especially in the style transfer setup. Different variations of our method, except for ‘Ours v2’, yield similar diversity and user preference scores. We observe that ‘Ours v2’ shows artifacts in some outputs, leading to higher diversity but lower user preference. Our diversity results for the style sampling setup have some variation and are sensitive to the mapper network training, but are still either on par with or better than the baselines.

Conclusion. We investigated the effectiveness of embedding pre-training for the task of multi-modal I2I translation. The pre-training can be done once on auxiliary data, and generalizes well to other domains. This allows for faster training of I2I translation networks with fewer losses and achieves more faithful style capture and transfer. Furthermore, we studied the contribution of different loss terms, where we discovered that the noise added by a variational auto-encoder can limit the expressiveness of low-dimensional latent spaces. Finally, we achieved state-of-the-art results on several benchmarks." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 IMPLEMENTATION DETAILS", "text": "The generator network G has a symmetric encoder-decoder architecture based on Wang et al. (2018a), with extra skip connections formed by concatenating feature maps of the encoder and decoder.
We use a multiscale-patchGAN discriminator (Wang et al., 2018a) with 3 scales and employ an LSGAN (Mao et al., 2017) loss. The mapper network M is a multi-layer perceptron (MLP) with three 128-dimensional hidden layers and a tanh activation function. For the reconstruction loss, we use the perceptual loss (Johnson et al., 2016) evaluated at conv_{i,2} for i ∈ [1, 5] of VGG (Simonyan & Zisserman, 2014), with linear weights w_i = 1/2^{6−i} for i ∈ [1, 5]. The architecture of the style encoder E is adopted from Lee et al. (2018), and we use a latent style vector z ∈ R^8. Our optimizer setup is similar to that in Zhu et al. (2017b). We use three Adam optimizers: one for the generator G and encoder E, another for the discriminator D, and another for the generator G alone, with β_1 = 0, β_2 = 0.99 for all three optimizers, and learning rates of 0.001, 0.001, and 0.0001, respectively. We use a separate Adam optimizer for the mapper network M with β_1 = 0.5, β_2 = 0.99, and a learning rate of 0.01 with a decay rate of 0.7 applied every 50 steps. Relative weights for the loss terms are λ_cGAN = 1, λ_rec = 0.02, and λ_L2 = 0.01 for the GAN loss, reconstruction loss, and L2 latent vector regularization, respectively. When sampling triplets for any anchor image I_a, we use k_c = 5 and k_f = 13 for the sizes of the sets of close and far neighbors, respectively." }, { "heading": "A.2 MORE QUANTITATIVE COMPARISON", "text": "We report the Inception Score (IS) computed over the validation set of various datasets in Table 5. Surprisingly, results after fine-tuning (“ours - stage 3”) are slightly worse than those before fine-tuning (“ours - stage 2”), but both are still better than the baselines, except for the maps dataset. We also note that the Inception Score is not well suited to image-to-image translation tasks, since it prefers output diversity with respect to ImageNet classes, not within-class diversity as in our case." }, { "heading": "A.3 LATENT SPACE VISUALIZATION", "text": "Figure 6 visualizes the latent space learned by the style encoder E after pretraining and before fine-tuning (a), after fine-tuning (b), and the latent space learned by BicycleGAN (Zhu et al., 2017b) (c). The embedding learned through pretraining and before fine-tuning shows meaningful clusters. Fine-tuning further brings the embedding closer to that of BicycleGAN." }, { "heading": "A.4 ENCODER PRE-TRAINING WITH NON-STYLE METRICS", "text": "Pre-training the encoder using a style-based triplet loss proved successful for multi-modal image translation tasks where the variability in the target domain is mainly color-based. This is shown in the results obtained on several benchmarks, even before the fine-tuning stage (“ours - stage 2” in Table 1). We note, though, that the use of the style loss as a distance metric for triplet sampling is just one choice and can be replaced with other distance metrics depending on the target application. Triplet sampling with the style distance results in learning an embedding space where images with similar colors/styles lie close together, as shown in Section A.3. If, for example, we sample triplets instead based on the distance between VGG-Face (Parkhi et al., 2015) embeddings, the encoder will learn a latent space that is clustered by identity. In this section, we aim to validate that the proposed pre-training strategy can be extended to multi-modal image-to-image translation tasks with non-style variability.
We inspect the task of manipulating facial expressions, where the input is a\nneutral face, and the output can have other emotions or facial expressions. For this task, similar emotions should be embedded closely in the latent space. We therefore use an off-the-shelf facial expression recognition system to compute the emotion similarity/distance between any pair of images. Specifically, we compute the emotion distance as the euclidean distance between the 512-dimensional feature map of the last layer of a pretrained classification network (e.g., (Jie, 2018)). We visualize the learned latent space in Figure 7, which shows clusters with similar emotions or facial expressions. We also show example translation results on a holdout set of the front-view images of the KDEF dataset (KDEF, 2017) in Figure 8. We note that the generator successfully learns to manipulate facial expressions based solely on the pre-trained embeddings (without the finetuning stage). On the other hand, the BicycleGAN-based baselines collapsed to a single mode (over 3 different runs). This shows that our staged-training approach is stable and not sensitive to hyper-parameters, unlike the BicycleGAN baselines which will require careful hyper-parameter tuning to work properly on this task. We also point out that the poor output quality is mainly due to using a pixel-wise reconstruction loss for the generator training, while the input-output pairs in this dataset are not aligned. We didn’t investigate improving the generator training as this is orthogonal to verifying the generalization of encoder pre-training." }, { "heading": "A.5 STYLE SAMPLING COMPARISON", "text": "Figure 9 compares style sampling using the mapper networkM vs adhoc sampling from the assumed N(µ, σ) of an L2-regularized latent space, where µ, σ are empirically computed from the training set. Note that adhoc sampling can sometimes sample bad style codes outside the distribution (e.g. third image in first row, and first image in third row in the right side of Figure 9), since the assumption that a L2-regularized space would yield normally distributed latents with zero mean and low standard deviation is not explicitly enforced." }, { "heading": "A.6 TRAINING TIME", "text": "Simplifying the training objective allows for faster training time, as well as a larger batch size due to lower memory usage. Table 6 shows the processing time per 1000 training images for the baselines as well as different variations of our approach as defined in Table 4." }, { "heading": "A.7 CONVERGENCE ANALYSIS", "text": "Figure 10 compares the convergence of our staged training compared to the BicycleGAN baselines. The dotted line in the graph marks the transition between stages 2 and 3 of our training (i.e, switching from a fixed pre-trained encoder E to finetuning both G and E together). We measure the reconstruction error (LPIPS) of the validation set of the edges2handbags dataset as the training progresses. Results show that with a fixed pre-trained encoder, staged training starts with higher error than the baselines, but quickly drops to show similar performance as the baselines, and even beats the baselines before switching to stage 3 (marked by a dotted line). 
When starting to fine-tune the encoder E, we get a spike in the reconstruction error as the network adapts to the shift in the pre-trained embeddings, but staged training then recovers and steadily widens its performance gap over the baselines.

This shows the importance of the fine-tuning stage in tweaking the pre-trained embeddings to better serve the image synthesis task." } ]
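As a companion to the implementation details in Appendix A.1 above, here is a small sketch of the weighted VGG perceptual reconstruction loss with the stated linear weights w_i = 1/2^{6−i}; the per-layer L1 distance and the feature-extraction plumbing are assumptions, since the paper only specifies the layers and weights:

```python
import torch

# Appendix A.1: evaluate the perceptual loss at conv_{i,2} of VGG for
# i = 1..5, with linear weights w_i = 1 / 2^(6 - i).
WEIGHTS = [1.0 / 2 ** (6 - i) for i in range(1, 6)]  # [1/32, 1/16, 1/8, 1/4, 1/2]

def perceptual_loss(feats_real, feats_fake):
    # feats_*: lists of the five conv_{i,2} activation tensors for the real
    # and generated images; deeper layers contribute with larger weight.
    # The L1 distance per layer is an assumption for illustration.
    return sum(w * torch.mean(torch.abs(fr - ff))
               for w, fr, ff in zip(WEIGHTS, feats_real, feats_fake))
```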
2019
null
SP:8ec794421e38087b73f7d7fb4fbf373728ea39c7
[ "This paper considers learning low-dimensional representations from high-dimensional observations for control purposes. The authors extend the E2C framework by introducing the new PCC-Loss function. This new loss function aims to reflect the prediction in the observation space, the consistency between latent and observation dynamics, and the low curvature in the latent dynamics. The low curvature term is used to bias the latent dynamics towards models that can be better approximated as locally linear models. The authors provide theory (error bounds) to justify their proposed PCC-Loss function. Then variational PCC is developed to make the algorithm tractable. The proposed method is evaluated in 5 different simulated tasks and compared with the original E2C method and the RCE method. ", "This work proposes a regularization strategy for learning optimal policy for a dynamic control problem in a latent low-dimensional domain. The work is based on LCE approach, but with in-depth analysis on how to choose/design the regularization for the \\hat{P} operator, which consists of an encoder, a decoder, and dynamics in the latent space. In particular, the author argued that three principles (prediction, consistency, and curvature) should be taken into consideration when designing the regularizer of the learning cost function - so that the learned latent domain can serve better for the purpose of optimizing the long-term cost in the ambient domain. " ]
Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lower-dimensional latent representation space, estimate the latent dynamics model, and then utilize this model for control in the latent space. An important open question is how to learn a representation that is amenable to existing control algorithms. In this paper, we focus on learning representations for locally-linear control algorithms, such as iterative LQR (iLQR). By formulating and analyzing the representation learning problem from an optimal control perspective, we establish three underlying principles that the learned representation should satisfy: 1) accurate prediction in the observation space, 2) consistency between latent and observation space dynamics, and 3) low curvature in the latent space transitions. These principles naturally correspond to a loss function that consists of three terms: prediction, consistency, and curvature (PCC). Crucially, to make PCC tractable, we derive an amortized variational bound for the PCC loss function. Extensive experiments on benchmark domains demonstrate that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance. Further ablation studies give support to the importance of all three PCC components for learning a good latent space for control.
[ { "affiliations": [], "name": "LOCALLY-LINEAR CONTROL" }, { "affiliations": [], "name": "Nir Levine" }, { "affiliations": [], "name": "Yinlam Chow" }, { "affiliations": [], "name": "Rui Shu" }, { "affiliations": [], "name": "Ang Li" }, { "affiliations": [], "name": "Mohammad Ghavamzadeh" }, { "affiliations": [], "name": "Hung Bui" } ]
[ { "authors": [ "E. Banijamali", "R. Shu", "M. Ghavamzadeh", "H. Bui", "A. Ghodsi" ], "title": "Robust locally-linear controllable embedding", "venue": "In Proceedings of the Twenty First International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "Dimitri Bertsekas" ], "title": "Dynamic programming and optimal control, volume 1", "venue": "Athena scientific,", "year": 1995 }, { "authors": [ "Francesco Borrelli", "Alberto Bemporad", "Manfred Morari" ], "title": "Predictive control for linear and hybrid systems", "venue": null, "year": 2017 }, { "authors": [ "Morten Breivik", "Thor I Fossen" ], "title": "Principles of guidance-based path following in 2d and 3d", "venue": "In Proceedings of the 44th IEEE Conference on Decision and Control,", "year": 2005 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Roy De Maesschalck", "Delphine Jouan-Rimbaud", "Désiré L Massart" ], "title": "The mahalanobis distance", "venue": "Chemometrics and intelligent laboratory systems,", "year": 2000 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on machine learning", "year": 2011 }, { "authors": [ "Frederik Ebert", "Chelsea Finn", "Sudeep Dasari", "Annie Xie", "Alex Lee", "Sergey Levine" ], "title": "Visual foresight: Model-based deep reinforcement learning for vision-based robotic control", "venue": "arXiv preprint arXiv:1812.00568,", "year": 2018 }, { "authors": [ "Bernard Espiau", "François Chaumette", "Patrick Rives" ], "title": "A new approach to visual servoing in robotics", "venue": "ieee Transactions on Robotics and Automation,", "year": 1992 }, { "authors": [ "Chelsea Finn", "Xin Yu Tan", "Yan Duan", "Trevor Darrell", "Sergey Levine", "Pieter Abbeel" ], "title": "Deep spatial autoencoders for visuomotor learning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2016 }, { "authors": [ "Katsuhisa Furuta", "Masaki Yamakita", "Seiichi Kobayashi" ], "title": "Swing up control of inverted pendulum", "venue": "In Proceedings IECON’91:", "year": 1991 }, { "authors": [ "Yarin Gal", "Rowan McAllister", "Carl Edward Rasmussen" ], "title": "Improving pilco with bayesian neural network dynamics models", "venue": "In Data-Efficient Machine Learning workshop, ICML,", "year": 2016 }, { "authors": [ "Shlomo Geva", "Joaquin Sitte" ], "title": "A cartpole experiment benchmark for trainable controllers", "venue": "IEEE Control Systems Magazine,", "year": 1993 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "arXiv preprint arXiv:1811.04551,", "year": 2018 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { 
"authors": [ "Thanard Kurutach", "Aviv Tamar", "Ge Yang", "Stuart J Russell", "Pieter Abbeel" ], "title": "Learning plannable representations with causal infogan", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xuzhi Lai", "Ancai Zhang", "Min Wu", "Jinhua She" ], "title": "Singularity-avoiding swing-up control for underactuated three-link gymnast robot using virtual coupling between control torques", "venue": "International Journal of Robust and Nonlinear Control,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Kevin P Murphy" ], "title": "Machine learning: a probabilistic perspective", "venue": "MIT press,", "year": 2012 }, { "authors": [ "Erik Ordentlich", "Marcelo J Weinberger" ], "title": "A distribution dependent refinement of pinsker’s inequality", "venue": "IEEE Transactions on Information Theory,", "year": 2005 }, { "authors": [ "Marek Petrik", "Mohammad Ghavamzadeh", "Yinlam Chow" ], "title": "Safe policy improvement by minimizing robust baseline regret", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "James Blake Rawlings", "David Q Mayne" ], "title": "Model predictive control: Theory and design", "venue": "Nob Hill Pub. Madison, Wisconsin,", "year": 2009 }, { "authors": [ "Alexander Shapiro", "Darinka Dentcheva", "Andrzej Ruszczyński" ], "title": "Lectures on stochastic programming: modeling and theory", "venue": null, "year": 2009 }, { "authors": [ "Mark W Spong" ], "title": "The swing up control problem for the acrobot", "venue": "IEEE control systems magazine,", "year": 1995 }, { "authors": [ "Manuel Watter", "Jost Springenberg", "Joschka Boedecker", "Martin Riedmiller" ], "title": "Embed to control: A locally linear latent dynamics model for control from raw images", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Bernhard Wymann", "Eric Espié", "Christophe Guionneau", "Christos Dimitrakakis", "Rémi Coulom", "Andrew Sumner" ], "title": "Torcs, the open racing car simulator. Software available at http://torcs", "venue": "sourceforge. net,", "year": 2000 }, { "authors": [ "Marvin Zhang", "Sharad Vikram", "Laura Smith", "Pieter Abbeel", "Matthew J Johnson", "Sergey Levine" ], "title": "Solar: Deep structured latent representations for model-based reinforcement learning", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "UXRLLC(P" ], "title": "Therefore, based on the dynamic programming result that bounds the difference of value function w.r.t. different Bellman operators in finite-horizon problems (for example see Theorem 1.3 in Bertsekas (1995)), the above inequality implies the following bound in the value function, w.p", "venue": null, "year": 1995 }, { "authors": [ "T−1} ← iLQR(L(U" ], "title": "zt)), with zt ∼ E(·|xt), where iLQR(`Control(U ; P̃ ∗, z)) denotes the iLQR algorithm with initial latent state z. To understand the performance of this policy w.r.t. the MDP problem, we refer to the sub-optimality bound of iLQR (w.r.t. open-loop control problem in (SOC1)) in Section", "venue": null, "year": 2017 }, { "authors": [ "noise. 
D" ], "title": "DESCRIPTION OF THE DOMAINS Planar System In this task the main goal is to navigate an agent in a surrounded area on a 2D plane (Breivik & Fossen, 2005), whose goal is to navigate from a corner to the opposite one, while avoiding the six obstacles in this area. The system is observed through a set of 40 × 40 pixel images", "venue": null, "year": 2005 }, { "authors": [ "Hafner" ], "title": "2018), we added a deterministic loss term in the form of cross", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Decomposing the problem of decision-making in an unknown environment into estimating dynamics followed by planning provides a powerful framework for building intelligent agents. This decomposition confers several notable benefits. First, it enables the handling of sparse-reward environments by leveraging the dense signal of dynamics prediction. Second, once a dynamics model is learned, it can be shared across multiple tasks within the same environment. While the merits of this decomposition have been demonstrated in low-dimensional environments (Deisenroth & Rasmussen, 2011; Gal et al., 2016), scaling these methods to high-dimensional environments remains an open challenge.\nThe recent advancements in generative models have enabled the successful dynamics estimation of high-dimensional decision processes (Watter et al., 2015; Ha & Schmidhuber, 2018; Kurutach et al., 2018). This procedure of learning dynamics can then be used in conjunction with a plethora of decision-making techniques, ranging from optimal control to reinforcement learning (RL) (Watter et al., 2015; Banijamali et al., 2018; Finn et al., 2016; Chua et al., 2018; Ha & Schmidhuber, 2018; Kaiser et al., 2019; Hafner et al., 2018; Zhang et al., 2019). One particularly promising line of work in this area focuses on learning the dynamics and conducting control in a low-dimensional latent embedding of the observation space, where the embedding itself is learned through this process (Watter et al., 2015; Banijamali et al., 2018; Hafner et al., 2018; Zhang et al., 2019). We refer to this approach as learning controllable embedding (LCE). There have been two main approaches to this problem: 1) to start by defining a cost function in the high-dimensional observation space and learn the embedding space, its dynamics, and reward function, by interacting with the environment in a RL fashion (Hafner et al., 2018; Zhang et al., 2019), and 2) to first learn the embedding space and its dynamics, and then define a cost function in this low-dimensional space and conduct the control (Watter et al., 2015; Banijamali et al., 2018). This can be later combined with RL for extra fine-tuning of the model and control.\nIn this paper, we take the second approach and particularly focus on the important question of what desirable traits should the latent embedding exhibit for it to be amenable to a specific class of control/learning algorithms, namely the widely used class of locally-linear control (LLC) algorithms? We argue from an optimal control standpoint that our latent space should exhibit three properties. The first is prediction: given the ability to encode to and decode from the latent space, we expect ∗Equal contribution. Correspondence to nirlevine@google.com\nthe process of encoding, transitioning via the latent dynamics, and then decoding, to adhere to the true observation dynamics. The second is consistency: given the ability to encode a observation trajectory sampled from the true environment, we expect the latent dynamics to be consistent with the encoded trajectory. Finally, curvature: in order to learn a latent space that is specifically amenable to LLC algorithms, we expect the (learned) latent dynamics to exhibit low curvature in order to minimize the approximation error of its first-order Taylor expansion employed by LLC algorithms. 
Our contributions are thus as follows: (1) We propose the Prediction, Consistency, and Curvature (PCC) framework for learning a latent space that is amenable to LLC algorithms, and show that the elements of PCC arise systematically from bounding the suboptimality of the solution of the LLC algorithm in the latent space. (2) We design a latent variable model that adheres to the PCC framework and derive a tractable variational bound for training the model. (3) To the best of our knowledge, our proposed curvature loss for the transition dynamics (in the latent space) is novel. We also propose a direct amortization of the Jacobian calculation in the curvature loss, to make training with the curvature loss more efficient. (4) Through extensive experimental comparison, we show that the PCC model consistently outperforms E2C (Watter et al., 2015) and RCE (Banijamali et al., 2018) on a number of control-from-images tasks, and verify, via ablation, the importance of regularizing the model to have consistency and low curvature." }, { "heading": "2 PROBLEM FORMULATION", "text": "We are interested in controlling non-linear dynamical systems of the form st+1 = fS(st, ut) + w over the horizon T . In this definition, st ∈ S ⊆ Rns and ut ∈ U ⊆ Rnu are the state and action of the system at time step t ∈ {0, . . . , T − 1}, w is the Gaussian system noise, and fS is a smooth non-linear system dynamics. We are particularly interested in the scenario in which we only have access to the high-dimensional observation xt ∈ X ⊆ Rnx of each state st (nx ≫ ns). This scenario has application in many real-world problems, such as visual-servoing (Espiau et al., 1992), in which we only observe high-dimensional images of the environment and not its underlying state. We further assume that the high-dimensional observations x have been selected such that for any arbitrary control sequence U = {ut}T−1t=0 , the observation sequence {xt}Tt=0 is generated by a stationary Markov process, i.e., xt+1 ∼ P (·|xt, ut), ∀t ∈ {0, . . . , T − 1}. (A method to ensure this Markovian assumption is to buffer observations (Mnih et al., 2013) for a number of time steps.)

A common approach to control the above dynamical system is to solve the following stochastic optimal control (SOC) problem (Shapiro et al., 2009) that minimizes the expected cumulative cost (see Appendix B.3 for the extension to the closed-loop MDP problem):

minU L(U,P, c, x0) := E [ cT (xT ) + ∑T−1t=0 ct(xt, ut) | P, x0 ] , (SOC1)

where ct : X × U → R≥0 is the immediate cost function at time t, cT ∈ R≥0 is the terminal cost, and x0 is the observation at the initial state s0. Note that all immediate costs are defined in the observation space X and are bounded by cmax > 0 and Lipschitz with constant clip > 0. For example, in visual-servoing, (SOC1) can be formulated as a goal-tracking problem (Ebert et al., 2018), where we control the robot to reach the goal observation xgoal, and the objective is to compute a sequence of optimal open-loop actions U that minimizes the cumulative tracking error E[ ∑t ‖xt − xgoal‖2 | P, x0 ].

Since the observations x are high dimensional and the dynamics in the observation space P (·|xt, ut) is unknown, solving (SOC1) is often intractable. To address this issue, a class of algorithms has recently been developed that is based on learning a low-dimensional latent (embedding) space Z ⊆ Rnz (nz ≪ nx) and latent state dynamics, and performing optimal control there. This class, which we refer to as learning controllable embedding (LCE) throughout the paper, includes recently developed algorithms such as E2C (Watter et al., 2015), RCE (Banijamali et al., 2018), and SOLAR (Zhang et al., 2019). 
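As a concrete reference point before turning to the LCE machinery, the (SOC1) objective can be read operationally as a Monte-Carlo rollout estimate. The following minimal sketch illustrates this; the callables env_step, cost, and terminal_cost are hypothetical placeholders standing in for the unknown dynamics P and the costs ct, cT, and are not part of the paper's code.

```python
import numpy as np

def soc1_cost(env_step, cost, terminal_cost, x0, U, n_rollouts=32):
    """Monte-Carlo estimate of L(U, P, c, x0) for an open-loop action
    sequence U, given only a black-box sampler of P(x' | x, u)."""
    totals = []
    for _ in range(n_rollouts):
        x, total = x0, 0.0
        for t, u in enumerate(U):
            total += cost(x, u, t)      # c_t(x_t, u_t)
            x = env_step(x, u)          # sample x_{t+1} ~ P(. | x_t, u_t)
        totals.append(total + terminal_cost(x))  # + c_T(x_T)
    return float(np.mean(totals))
```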
The main idea behind the LCE approach is to learn a triplet: (i) an encoder E : X → P(Z); (ii) a dynamics model in the latent space F : Z × U → P(Z); and (iii) a decoder D : Z → P(X ). These in turn can be thought of as defining a (stochastic) mapping P̂ : X × U → P(X ) of the form P̂ = D ◦ F ◦ E. We then wish to solve the SOC in the latent space Z:

minU,P̂ E [ L(U,F, c, z0) | E, x0 ] + λ2 √R2(P̂ ), (SOC2)

such that the solution of (SOC2), U∗2 , has similar performance to that of (SOC1), U∗1 , i.e., L(U∗1 , P, c, x0) ≈ L(U∗2 , P, c, x0). In (SOC2), z0 is the initial latent state sampled from the encoder E(·|x0); c̄ : Z × U → R≥0 is the latent cost function, defined as c̄t(zt, ut) = ∫ ct(xt, ut) dD(xt|zt); R2(P̂ ) is a regularizer over the mapping P̂ ; and λ2 is the corresponding regularization parameter. We will define R2 and λ2 more precisely in Section 3. Note that the expectation in (SOC2) is over the randomness generated by the (stochastic) encoder E.

[Figure 1 (caption, partially recovered): evolution of the system (a) (blue) in equation SOC1 under dynamics P ; (b) (green) in equation SOC2 under dynamics F ; and (c) (red) in equation SOC3 under dynamics P̂ .]" }, { "heading": "3 PCC MODEL: A CONTROL PERSPECTIVE", "text": "As described in Section 2, we are primarily interested in solving (SOC1), whose states evolve under dynamics P , as shown at the bottom row of Figure 1(a) in blue. However, because of the difficulties in solving (SOC1), mainly due to the high dimension of the observations x, LCE proposes to learn a mapping P̂ by solving (SOC2), which consists of a loss function, whose states evolve under dynamics F (after an initial transition by the encoder E), as depicted in Figure 1(b), and a regularization term. The role of the regularizer R2 is to account for the performance gap between (SOC1) and the loss function of (SOC2), due to the discrepancy between their evolution paths, shown in Figures 1(a) (blue) and 1(b) (green). The goal of LCE is to learn P̂ of the particular form P̂ = D ◦ F ◦ E, described in Section 2, such that the solution of (SOC2) has performance similar to that of (SOC1). In this section, we propose a principled way to select the regularizer R2 to achieve this goal. Since the exact form of (SOC2) has a direct effect on learning P̂ , designing this regularization term, in turn, provides us with a recipe (loss function) to learn the latent (embedded) space Z. In the following subsections, we show that this loss function consists of three terms that correspond to prediction, consistency, and curvature, the three ingredients of our PCC model.

Note that these two SOCs evolve in two different spaces, one in the observation space X under dynamics P , and the other in the latent space Z (after an initial transition from X to Z) under dynamics F . Unlike P and F , which only operate in a single space, X and Z respectively, P̂ can govern the evolution of the system in both X and Z (see Figure 1(c)). Therefore, any recipe to learn P̂ , and as a result the latent space Z, should have at least two terms, to guarantee that the evolution paths resulting from P̂ in X and Z are consistent with those generated by P and F . We derive these two terms, which are the prediction and consistency terms in the loss function used by our PCC model, in Sections 3.1 and 3.2, respectively. 
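Before deriving those terms, a minimal sketch of the encoder-dynamics-decoder triplet as diagonal-Gaussian networks may help fix ideas. The architecture, hidden width, and dimensions below are illustrative placeholders we have chosen for the sketch, not the models used in the paper.

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Maps (concatenated) inputs to a diagonal-Gaussian distribution."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * out_dim))

    def forward(self, *args):
        mu, log_std = self.net(torch.cat(args, -1)).chunk(2, -1)
        return torch.distributions.Normal(mu, log_std.exp())

# One illustrative instantiation of the triplet P_hat = D o F o E:
x_dim, z_dim, u_dim = 1600, 2, 2          # e.g., flattened 40x40 images
E = GaussianHead(x_dim, z_dim)            # encoder  E : X -> P(Z)
F = GaussianHead(z_dim + u_dim, z_dim)    # dynamics F : Z x U -> P(Z)
D = GaussianHead(z_dim, x_dim)            # decoder  D : Z -> P(X)
```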
While these two terms are the result of learning P̂ in general SOC problems, in Section 3.3, we concentrate on the particular class of LLC algorithms (e.g., iLQR (Li & Todorov, 2004)) to solve SOC, and add the third term, curvature, to our recipe for learning P̂ ." }, { "heading": "3.1 PREDICTION OF THE NEXT OBSERVATION", "text": "Figures 1(a)(blue) and 1(c)(red) show the transition in the observation space under P and P̂ , where xt is the current observation, and xt+1 and x̂t+1 are the next observations under these two dynamics, respectively. Instead of learning a P̂ with minimum mismatch with P in terms of some distribution norm, we propose to learn P̂ by solving the following SOC:\nmin U,P̂ L(U, P̂ , c, x0) + λ3\n√ R3(P̂ ), (SOC3)\nwhose loss function is the same as the one in (SOC1), with the true dynamics replaced by P̂ . In Lemma 1 (see Appendix A.1, for proof), we show how to set the regularization term R3 in (SOC3), such that the control sequence resulted from solving (SOC3), U∗3 , has similar performance to the solution of (SOC1), U∗1 , i.e., L(U ∗ 1 , P, c, x0) ≈ L(U∗3 , P, c, x0).\nLemma 1. Let U∗1 be a solution to (SOC1) and (U∗3 , P̂ ∗3 ) be a solution to (SOC3) with R3(P̂ ) = Ex,u [ DKL ( P (·|x, u)||P̂ (·|x, u) )] and λ3 = √ 2U · T 2cmax. (1)\nThen, we have L(U∗1 , P, c, x0) ≥ L(U∗3 , P, c, x0)− 2λ3 √ R3(P̂ ∗3 ).\nIn Eq. 1, the expectation is over the state-action stationary distribution of the policy used to generate the training samples (uniformly random policy in this work), and U is the Lebesgue measure of U .3\n3In the case when sampling policy is non-uniform and has no measure-zero set, 1/U is its minimum measure." }, { "heading": "3.2 CONSISTENCY IN PREDICTION OF THE NEXT LATENT STATE", "text": "In Section 3.1, we provided a recipe for learning P̂ (in form of D ◦ F ◦ E) by introducing an intermediate (SOC3) that evolves in the observation space X according to dynamics P̂ . In this section we first connect (SOC2) that operates in Z with (SOC3) that operates in X . For simplicity and without loss generality, assume the initial cost c0(x, u) is zero.4 Lemma 2 (see Appendix A.2, for proof) suggests how we shall set the regularizer in (SOC2), such that its solution performs similarly to that of (SOC3), under their corresponding dynamics models.\nLemma 2. Let (U∗3 , P̂ ∗3 ) be a solution to (SOC3) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R′2(P̂ ) = Ex,u [ DKL (( E ◦ P̂ ) (·|x, u)|| ( F ◦ E ) (·|x, u) )] and λ2 = √ 2U · T 2cmax. (2)\nThen, we have L(U∗3 , P̂ ∗ 3 , c, x0) ≥ L(U∗2 , P̂ ∗2 , c, x0)− 2λ2 √ R′2(P̂ ∗ 2 ) .\nSimilar to Lemma 1, in Eq. 2, the expectation is over the state-action stationary distribution of the policy used to generate the training samples. Moreover, ( E ◦ P̂ ) (z′|x, u) = ∫ x′ E(z′|x′)dP̂ (x′|x, u)\nand ( F ◦E ) (z′|x, u) = ∫ z F (z′|z, u)dE(z|x) are the probability over the next latent state z′, given the current observation x and action u, in (SOC2) and (SOC3) (see the paths xt → zt → z̃t+1 and xt → zt → z̃t+1 → x̂t+1 → ẑt+1 in Figures 1(b)(green) and 1(c)(red)). Therefore R′2(P̂ ) can be interpreted as the measure of discrepancy between these models, which we term as consistency loss.\nAlthough Lemma 2 provides a recipe to learn P̂ by solving (SOC2) with the regularizer (2), unfortunately this regularizer cannot be computed from the data – that is of the form (xt, ut, xt+1) – because the first term in the DKL requires marginalizing over current and next latent states (zt and z̃t+1 in Figure 1(c)). 
To address this issue, we propose to use the (computable) regularizer

R′′2 (P̂ ) = Ex,u,x′ [ DKL ( E(·|x′) ‖ (F ◦ E)(·|x, u) ) ] , (3)

in which the expectation is over (x, u, x′) sampled from the training data. Corollary 1 (see Appendix A.3 for proof) bounds the performance loss resulting from using R′′2 (P̂ ) instead of R′2(P̂ ), and shows that it could still be a reasonable choice. Corollary 1. Let (U∗3 , P̂ ∗3 ) be a solution to (SOC3) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R′′2 (P̂ ) and λ2 defined by (3) and (2). Then, we have L(U∗3 , P̂ ∗3 , c, x0) ≥ L(U∗2 , P̂ ∗2 , c, x0) − 2λ2 √( 2R′′2 (P̂ ∗2 ) + 2R3(P̂ ∗2 ) ).

Lemma 1 suggests a regularizer R3 to connect the solutions of (SOC1) and (SOC3). Similarly, Corollary 1 shows that the regularizer R′′2 in (3) establishes a connection between the solutions of (SOC3) and (SOC2). Putting these results together, we achieve our goal in Lemma 3 (see Appendix A.4 for proof) of designing a regularizer for (SOC2), such that its solution performs similarly to that of (SOC1).

Lemma 3. Let U∗1 be a solution to (SOC1) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with

R2(P̂ ) = 3R3(P̂ ) + 2R′′2 (P̂ ) and λ2 = 2 √U · T 2cmax, (4)

where R3(P̂ ) and R′′2 (P̂ ) are defined by (1) and (3). Then, we have L(U∗1 , P, c, x0) ≥ L(U∗2 , P, c, x0) − 2λ2 √R2(P̂ ∗2 ).

4With non-zero initial cost, similar results can be derived by having an additional consistency term on x0." }, { "heading": "3.3 LOCALLY-LINEAR CONTROL IN THE LATENT SPACE AND CURVATURE REGULARIZATION", "text": "In Sections 3.1 and 3.2, we derived a loss function to learn the latent space Z. This loss function, which was motivated by the general SOC perspective, consists of two terms that enforce the latent space not only to predict the next observations accurately, but also to be suitable for control. In this section, we focus on the class of locally-linear control (LLC) algorithms (e.g., iLQR) for solving (SOC2), and show how this choice adds a third term, corresponding to curvature, to the regularizer of (SOC2) and, as a result, to the loss function of our PCC model.

The main idea in LLC algorithms is to iteratively compute an action sequence that improves the current trajectory, by linearizing the dynamics around this trajectory, and to use this action sequence to generate the next trajectory (see Appendix B for more details about LLC and iLQR). This procedure implicitly assumes that the dynamics is approximately locally linear. To ensure this in (SOC2), we further restrict the dynamics P̂ and assume that it is not only of the form P̂ = D ◦ F ◦ E, but that F , the latent space dynamics, has low curvature. One way to ensure this in (SOC2) is to directly impose a penalty over the curvature of the latent space transition function fZ(z, u). Assume F (z, u) = fZ(z, u) + w, where w is a Gaussian noise. Consider the following SOC problem:

minU,P̂ E [ L(U,F, c, z0) | E, x0 ] + λLLC √( R2(P̂ ) + RLLC(P̂ ) ), (SOC-LLC)

where R2 is defined by (4); U is optimized by an LLC algorithm, such as iLQR; and RLLC(P̂ ) is given by

RLLC(P̂ ) = Ex,u [ E [ ‖fZ(z + εz, u + εu) − fZ(z, u) − (∇zfZ(z, u) · εz + ∇ufZ(z, u) · εu)‖22 ] | E ] , (5)

where ε = (εz, εu)> ∼ N (0, δ2I), and δ > 0 is a tunable parameter that characterizes the “diameter” of the latent state-action space in which the latent dynamics model has low curvature. λLLC = 2 √2 T 2cmax √U max ( clip(1 + √(2 log(2T/η))) √(X/2), 1 ), where 1/X is the minimum non-zero measure of the sample distribution w.r.t. X , and 1 − η ∈ [0, 1) is a probability threshold. 
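A single-sample estimate of the curvature penalty in (5) can be computed with one Jacobian-vector product; the sketch below assumes f_Z is any differentiable PyTorch function (z, u) -> z'. It also includes the amortized variant introduced later in Section 4.2, where A_net and B_net are hypothetical modules of our own naming that output the linearization coefficient matrices.

```python
import torch
from torch.autograd.functional import jvp

def curvature_loss(f_Z, z, u, delta=0.1):
    """Monte-Carlo estimate of R_LLC in (5): the squared error of the
    first-order Taylor expansion of f_Z around (z, u)."""
    eps_z, eps_u = delta * torch.randn_like(z), delta * torch.randn_like(u)
    # One JVP gives grad_z f_Z . eps_z + grad_u f_Z . eps_u at (z, u);
    # create_graph=True keeps the estimate differentiable for training.
    f0, taylor = jvp(f_Z, (z, u), (eps_z, eps_u), create_graph=True)
    residual = f_Z(z + eps_z, u + eps_u) - f0 - taylor
    return residual.pow(2).sum(-1).mean()

def amortized_curvature_loss(f_Z, A_net, B_net, z, u, delta=0.1):
    """Amortized variant (cf. Section 4.2): learned networks A_net, B_net,
    returning (batch, nz, nz) and (batch, nz, nu) matrices, replace the
    Jacobians, so no differentiation through f_Z is needed."""
    eps_z, eps_u = delta * torch.randn_like(z), delta * torch.randn_like(u)
    z_bar, u_bar = z + eps_z, u + eps_u
    lin = (A_net(z_bar, u_bar) @ eps_z.unsqueeze(-1)).squeeze(-1) \
        + (B_net(z_bar, u_bar) @ eps_u.unsqueeze(-1)).squeeze(-1)
    return (f_Z(z_bar, u_bar) - lin - f_Z(z, u)).pow(2).sum(-1).mean()
```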
Lemma 4 (see Appendix A.5, for proof and discussions on how δ affects LLC performance) shows that a solution of (SOC-LLC) has similar performance to a solution of (SOC1, and thus, (SOC-LLC) is a reasonable optimization problem to learn P̂ , and also the latent space Z . Lemma 4. Let (U∗LLC, P̂ ∗LLC) be a LLC solution to (SOC-LLC) and U∗1 be a solution to (SOC1). Suppose the nominal latent state-action trajectory {(zt,ut)}T−1t=0 satisfies the condition: (zt,ut) ∼ N ((z∗2,t, u∗2,t), δ2I), where {(z∗2,t, u∗2,t)}T−1t=0 is the optimal trajectory of (SOC2). Then with proba-\nbility 1− η, we have L(U∗1 , P, c, x0) ≥ L(U∗LLC, P, c, x0)− 2λLLC √ R2(P̂ ∗LLC) +RLLC(P̂ ∗ LLC) .\nIn practice, instead of solving (SOC-LLC) jointly for U and P̂ , we treat (SOC-LLC) as a bi-level optimization problem, first, solve the inner optimization problem for P̂ , i.e.,\nP̂ ∗ ∈ arg min P̂ λpR ′ 3(P̂ ) + λcR ′′ 2 (P̂ ) + λcurRLLC(P̂ ), (PCC-LOSS)\nwhere R′3(P̂ ) = −Ex,u,x′ [log P̂ (x′|x, u)] is the negative log-likelihood,5 and then, solve the outer optimization problem, minU L(U, F̂ ∗, c̄, z0), where P̂ ∗ = D̂∗◦F̂ ∗◦Ê∗, to obtain the optimal control sequence U∗. Solving (SOC-LLC) this way is an approximation, in general, but is justified, when the regularization parameter λLLC is large. Note that we leave the regularization parameters (λp, λc, λcur) as hyper-parameters of our algorithm, and do not use those derived in the lemmas of this section. Since the loss for learning P̂ ∗ in (PCC-LOSS) enforces (i) prediction accuracy, (ii) consistency in latent state prediction, and (iii) low curvature over fZ , through the regularizers R′3, R ′′\n2 , and RLLC, respectively, we refer to it as the prediction-consistency-curvature (PCC) loss." }, { "heading": "4 INSTANTIATING THE PCC MODEL IN PRACTICE", "text": "The PCC-Model objective in (PCC-LOSS) introduces the optimization problem minP̂ λpR ′ 3(P̂ ) + λcR ′′ 2 (P̂ ) + λcurRLLC(P̂ ). To instantiate this model in practice, we describe P̂ = D ◦ F ◦ E as a latent variable model that factorizes as P̂ (xt+1, zt, ẑt+1 | xt, ut) = P̂ (zt | xt)P̂ (ẑt+1 | zt, ut)P̂ (xt+1 | ẑt+1). In this section, we propose a variational approximation to the intractable negative log-likelihood R′3 and batch-consistency R ′′ 2 losses, and an efficient approximation of the curvature loss RLLC." }, { "heading": "4.1 VARIATIONAL PCC", "text": "The negative log-likelihood 6 R′3 admits a variational bound via Jensen’s Inequality,\nR′3(P̂ ) = − log P̂ (xt+1 | xt, ut) = − logEQ(zt,ẑt+1|xt,ut,xt+1) [ P̂ (xt+1, zt, ẑt+1 | xt, ut) Q(zt, ẑt+1 | xt, ut, xt+1) ]\n≤ −EQ(zt,ẑt+1|xt,ut,xt+1)\n[ log\nP̂ (xt+1, zt, ẑt+1 | xt, ut) Q(zt, ẑt+1 | xt, ut, xt+1)\n] = R′3,NLE-Bound(P̂ , Q), (6)\n5Since R3(P̂ ) is the sum of R′3(P̂ ) and the entropy of P , we replaced it with R′3(P̂ ) in (PCC-LOSS). 6For notation convenience, we drop the expectation over the empirical data that appears in various loss terms.\nwhich holds for any choice of recognition model Q. For simplicity, we assume the recognition model employs bottom-up inference and thus factorizes as Q(zt, ẑt+1|xt, xt+1, ut) = Q(ẑt+1|xt+1)Q(zt|ẑt+1, xt, ut). The main idea behind choosing a backward-facing model is to allow the model to learn to account for noise in the underlying dynamics. We estimate the expectations in (6) via Monte Carlo simulation. 
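For concreteness, one reparameterized Monte-Carlo sample of the bound in (6) can be computed as below. Here E, F, D and the recognition factors Q_next = Q(ẑ' | x') and Q_back = Q(z | ẑ', x, u) are assumed to be callables returning torch Normal distributions; all names are illustrative rather than the paper's code.

```python
import torch

def nle_bound_sample(E, F, D, Q_next, Q_back, x, u, x_next):
    """Single-sample estimate of the variational bound (6) on the negative
    conditional log-likelihood -log P(x_{t+1} | x_t, u_t)."""
    q_zn = Q_next(x_next)
    z_next = q_zn.rsample()                        # z_hat_{t+1}
    q_z = Q_back(z_next, x, u)
    z = q_z.rsample()                              # z_t
    log_p = (D(z_next).log_prob(x_next).sum(-1)    # log P(x_{t+1} | z_hat_{t+1})
             + F(z, u).log_prob(z_next).sum(-1)    # log P(z_hat_{t+1} | z_t, u_t)
             + E(x).log_prob(z).sum(-1))           # log P(z_t | x_t)
    log_q = q_zn.log_prob(z_next).sum(-1) + q_z.log_prob(z).sum(-1)
    return -(log_p - log_q).mean()
```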
To reduce the variance of the estimator, we decompose R′3,NLE-Bound further into

−EQ(ẑt+1|xt+1) [ log P̂ (xt+1|ẑt+1) ] + EQ(ẑt+1|xt+1) [ DKL ( Q(zt|ẑt+1, xt, ut) ‖ P̂ (zt|xt) ) ] − H ( Q(ẑt+1|xt+1) ) − EQ(ẑt+1|xt+1),Q(zt|ẑt+1,xt,ut) [ log P̂ (ẑt+1|zt, ut) ] ,

and note that the entropy H(·) and Kullback-Leibler DKL(·‖·) terms are analytically tractable when Q is restricted to a suitably chosen variational family (i.e., in our experiments, Q(ẑt+1 | xt+1) and Q(zt | ẑt+1, xt, ut) are factorized Gaussians). The derivation is provided in Appendix C.1. Interestingly, the consistency loss R′′2 admits a similar treatment. We note that the consistency loss seeks to match the distribution of ẑt+1 | xt, ut with zt+1 | xt+1, which we represent below as

R′′2 (P̂ ) = DKL ( P̂ (zt+1|xt+1) ‖ P̂ (ẑt+1|xt, ut) ) = −H( P̂ (zt+1|xt+1) ) − EP̂ (zt+1|xt+1), ẑt+1=zt+1 [ log P̂ (ẑt+1|xt, ut) ] .

Here, P̂ (ẑt+1 | xt, ut) is intractable due to the marginalization of zt. We employ the same procedure as in (6) to construct a tractable variational bound

R′′2 (P̂ ) ≤ −H( P̂ (zt+1|xt+1) ) − EP̂ (zt+1|xt+1), ẑt+1=zt+1 EQ(zt|ẑt+1,xt,ut) [ log ( P̂ (zt, ẑt+1|xt, ut) / Q(zt|ẑt+1, xt, ut) ) ] .

We now make the further simplifying assumption that Q(ẑt+1 | xt+1) = P̂ (ẑt+1 | xt+1). This allows us to rewrite the expression as

R′′2 (P̂ ) ≤ −H( Q(ẑt+1|xt+1) ) − EQ(ẑt+1|xt+1),Q(zt|ẑt+1,xt,ut) [ log P̂ (ẑt+1|zt, ut) ] + EQ(ẑt+1|xt+1) [ DKL( Q(zt|ẑt+1, xt, ut) ‖ P̂ (zt|xt) ) ] = R′′2,Bound(P̂ , Q), (7)

which is a subset of the terms in (6). See Appendix C.2 for a detailed derivation." }, { "heading": "4.2 CURVATURE REGULARIZATION AND AMORTIZED GRADIENT", "text": "In practice we use a variant of the curvature loss where the Taylor expansions and gradients are evaluated at z̄ = z + εz and ū = u + εu,

RLLC(P̂ ) = Eε∼N (0,δI) [ ‖fZ(z̄, ū) − (∇zfZ(z̄, ū) εz + ∇ufZ(z̄, ū) εu) − fZ(z, u)‖22 ] . (8)

When nz is large, evaluating and differentiating through the Jacobians can be slow. To circumvent this issue, the Jacobian evaluation can be amortized by treating the Jacobians as the coefficients of the best linear approximation at the evaluation point. This leads to a new amortized curvature loss

RLLC-Amor(P̂ , A,B) = Eε∼N (0,δI) [ ‖fZ(z̄, ū) − (A(z̄, ū) εz + B(z̄, ū) εu) − fZ(z, u)‖22 ] , (9)

where A and B are function approximators to be optimized. Intuitively, the amortized curvature loss seeks, for any given (z, u), to find the best choice of linear approximation induced by A(z, u) and B(z, u) such that the behavior of Fµ in the neighborhood of (z, u) is approximately linear." }, { "heading": "5 RELATION TO PREVIOUS EMBED-TO-CONTROL APPROACHES", "text": "In this section, we highlight the key differences between PCC and the closest previous works, namely E2C and RCE. A key distinguishing factor is PCC's use of a nonlinear latent dynamics model paired with an explicit curvature loss. In comparison, E2C and RCE both employed “locally-linear dynamics” of the form z′ = A(z̄, ū)z + B(z̄, ū)u + c(z̄, ū), where z̄ and ū are auxiliary random variables meant to be perturbations of z and u. When contrasted with (9), it is clear that neither A nor B in the E2C/RCE formulation can be treated as the Jacobians of the dynamics, and hence the curvature of the dynamics is not being controlled explicitly. Furthermore, since the locally-linear dynamics are wrapped inside the maximum-likelihood estimation, both E2C and RCE conflate the two key elements, prediction and curvature. This makes controlling the stability of training much more difficult. 
Not only does PCC explicitly separate these two components, but we are also the first to explicitly demonstrate, theoretically and empirically, that the curvature loss is important for iLQR.

Furthermore, RCE does not incorporate PCC's consistency loss. Note that PCC, RCE, and E2C are all Markovian encoder-transition-decoder frameworks. Under such a framework, sole reliance on minimizing the prediction loss results in a discrepancy between how the model is trained (maximizing the likelihood induced by encoding-transitioning-decoding) and how it is used at test time for control (continual transitioning in the latent space without ever decoding). By explicitly minimizing the consistency loss, PCC reduces the discrepancy between how the model is trained and how it is used at test time for planning. Interestingly, E2C does include a regularization term that is akin to PCC's consistency loss. However, as noted by the authors of RCE, E2C's maximization of the pair-marginal log-likelihood of (xt, xt+1), as opposed to the conditional likelihood of xt+1 given xt, means that E2C does not properly minimize the prediction loss prescribed by the PCC framework." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we compare the performance of PCC with two model-based control algorithm baselines, RCE7 (Banijamali et al., 2018) and E2C (Watter et al., 2015), and run a thorough ablation study on the various components of PCC. The experiments are based on the following continuous control benchmark domains (see Appendix D for more descriptions): (i) Planar System, (ii) Inverted Pendulum, (iii) Cartpole, (iv) 3-link manipulator, and (v) the TORCS simulator8 (Wymann et al., 2000).

To generate our training and test sets, each consisting of triples (xt, ut, xt+1), we: (1) sample an underlying state st and generate its corresponding observation xt, (2) sample an action ut, and (3) obtain the next state st+1 according to the state transition dynamics, add to it a zero-mean Gaussian noise with variance σ2Ins, and generate the corresponding observation xt+1. To ensure that the observation-action data is uniformly distributed (see Section 3), we sample the state-action pair (st, ut) uniformly from the state-action space. To understand the robustness of each model, we consider both deterministic (σ = 0) and stochastic scenarios. In the stochastic case, we add noise to the system with different values of σ and evaluate the models' performance under various degrees of noise.

Each task has underlying start and goal states that are unobservable to the algorithms; instead, the algorithms have access to the corresponding start and goal observations. We apply control using the iLQR algorithm (see Appendix B), with the same cost function that was used by RCE and E2C, namely, c̄(zt, ut) = (zt − zgoal)>Q(zt − zgoal) + u>t Rut and c̄(zT ) = (zT − zgoal)>Q(zT − zgoal), where zgoal is obtained by encoding the goal observation, and Q = κ · Inz , R = Inu.9 Details of our implementations are specified in Appendix D.3. We report performance in the underlying system, specifically the percentage of time spent in the goal region.10

A Reproducible Experimental Pipeline In order to measure performance reproducibility, we perform the following 2-step pipeline. For each control task and algorithm, we (1) train 10 models independently, and (2) solve 10 control tasks per model (we do not cherry-pick, but instead perform a total of 10 × 10 = 100 control tasks). We report statistics averaged over all the tasks (in addition, we report the best-performing model averaged over its 10 tasks). By adopting a principled and statistically reliable evaluation pipeline, we also address a pitfall of the compared baselines, where the best model needed to be cherry-picked and training variance was not reported.

7For the RCE implementation, we directly optimize the ELBO loss in Equation (16) of the paper. 
We also tried the approach reported in the paper of increasing the weights of the two middle terms and then annealing them to 1; however, in practice this method is sensitive to the annealing schedule and has convergence issues.

8See a control demo on the TORCS simulator at https://youtu.be/GBrgALRZ2fw

9According to the definition of the latent cost c̄(z, u) = D ◦ c(z, u), its quadratic approximation is given by c̄(z, u) ≈ [ z − zgoal ; u ]> [ ∇z ; ∇u ] (D ◦ c)|z=zgoal,u=0 + (1/2) [ z − zgoal ; u ]> [ ∇2zz ∇2zu ; ∇2uz ∇2uu ] (D ◦ c)|z=zgoal,u=0 [ z − zgoal ; u ]. Yet for simplicity, we choose the same latent cost as in RCE and E2C with fixed, tunable matrices Q and R.

10Another possible metric is the average distance to goal, which exhibits similar behavior.

Results Table 1 shows how PCC outperforms the baseline algorithms in the noiseless dynamics case, comparing means and standard deviations of the means on the different control tasks (for the case of added noise to the dynamics, which exhibits similar behavior, refer to Appendix E.1). It is important to note that for each algorithm, the performance metric averaged over all models is drastically different from that of the best model, which justifies our rationale for using the reproducible evaluation pipeline and avoiding cherry-picking when reporting. Figure 2 depicts 2 instances (randomly chosen from the 10 trained models) of the learned latent space representations on the noiseless dynamics of the Planar and Inverted Pendulum tasks for the PCC, RCE, and E2C models (additional representations can be found in Appendix E.2). Representations were generated by encoding observations corresponding to a uniform grid over the state space. Generally, PCC has a more interpretable representation of both the Planar and Inverted Pendulum systems than the other baselines, for both the noiseless and the noisy dynamics. Finally, in terms of computation, PCC demonstrates faster training, with a 64% improvement over RCE and a 2% improvement over E2C.11

Ablation Analysis On top of comparing the performance of PCC to the baselines, in order to understand the importance of each component in (PCC-LOSS), we also perform an ablation analysis on the consistency loss (with/without consistency loss) and the curvature loss (with/without curvature loss, and with/without amortization of the Jacobian terms). Table 2 shows the ablation analysis of PCC on the aforementioned tasks. From the numerical results, one can clearly see that when the consistency loss is omitted, the control performance degrades. This corroborates the theoretical results in Section 3.2, which indicate the relationship between the consistency loss and the estimation error between the next-latent dynamics prediction and the next-latent encoding. This further implies that as the consistency term vanishes, the gap between the control objective function and the model training loss widens, due to the accumulation of state estimation error. The control performance also decreases when one removes the curvature loss. 
This is mainly attributed to the error between the iLQR control algorithm and (SOC2). Although the latent state dynamics model is parameterized with neural networks, which are smooth, without enforcing the curvature loss term the norm of the Hessian (curvature) might still be high. This also agrees with the analysis in Section 3.3 about sub-optimality and the curvature of the latent dynamics. Finally, we observe that the performance of models trained without the amortized curvature loss is slightly better than that of their amortized counterparts; however, since the amortized curvature loss does not require computing gradients of the latent dynamics (which means that in stochastic optimization one does not need to estimate its Hessian), we observe relative speed-ups in model training with the amortized version (speed-ups of 6%, 9%, and 15% for the Planar System, Inverted Pendulum, and Cartpole, respectively).

11Comparison jobs were deployed on the Planar system using an Nvidia TITAN Xp GPU." }, { "heading": "7 CONCLUSION", "text": "In this paper, we argue from first principles that learning a latent representation for control should be guided by good prediction in the observation space and consistency between the latent transitions and the embedded observations. Furthermore, if variants of iterative LQR are used as the controller, low-curvature dynamics are desirable. All three elements of our PCC model are critical to the stability of model training and the performance of the in-latent-space controller. We hypothesize that each particular choice of controller will exert different requirements on the learned dynamics. A future direction is to identify and investigate the additional bias needed for learning an effective embedding and latent dynamics for other types of model-based control and planning methods." }, { "heading": "A TECHNICAL PROOFS OF SECTION 3", "text": "" }, { "heading": "A.1 PROOF OF LEMMA 1", "text": "Following derivations analogous to those of Lemma 11 in Petrik et al. (2016), for the case of finite-horizon MDPs one has the following chain of inequalities for any given control sequence {ut}T−1t=0 and initial observation x0:

|L(U, P̂ , x0) − L(U,P, x0)|
= | E [ cT (xT ) + ∑T−1t=0 ct(xt, ut) | P̂ , x0 ] − E [ cT (xT ) + ∑T−1t=0 ct(xt, ut) | P, x0 ] |
≤ T 2 · cmax E [ (1/T ) ∑T−1t=0 DTV(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ]
≤ √2 T 2 · cmax E [ (1/T ) ∑T−1t=0 √KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ]
≤ √2 T 2 · cmax √( E [ (1/T ) ∑T−1t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] ),

where DTV is the total variation distance of two distributions. The first inequality is based on the result of the aforementioned lemma, the second inequality is based on Pinsker's inequality (Ordentlich & Weinberger, 2005), and the third inequality is based on Jensen's inequality (Boyd & Vandenberghe, 2004) applied to the √(·) function.

Now consider the expected cumulative KL cost E [ (1/T ) ∑T−1t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] with respect to some arbitrary control action sequence {ut}T−1t=0 . Notice that this arbitrary action sequence can always be expressed in the form of a deterministic policy ut = π′(xt, t) with some nonstationary state-action mapping π′. 
Therefore, this KL cost can be written as:\nE\n[ 1\nT T−1∑ t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, π, x0\n]\n=E\n[ 1\nT T−1∑ t=0 ∫ ut∈U KL(P (·|xt, ut)||P̂ (·|xt, ut))dπ′(ut|xt, t) | P, x0 ]\n=E\n[ 1\nT T−1∑ t=0 ∫ ut∈U KL(P (·|xt, ut)||P̂ (·|xt, ut)) · dπ′(ut|xt, t) dU(ut) · dU(ut) | P, x0 ] ≤U · Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] ,\n(10)\nwhere the expectation is taken over the state-action occupation measure 1T ∑T−1 t=0 P(xt = x, ut = u|x0, U) of the finite-horizon problem that is induced by data-sampling policy U . The last inequality is due to change of measures in policy, and the last inequality is due to the facts that (i) π is a deterministic policy, (ii) dU(ut) is a sampling policy with lebesgue measure 1/U over all control actions, (iii) the following bounds for importance sampling factor holds:\n∣∣∣dπ′(ut|xt,t)dU(ut) ∣∣∣ ≤ U . To conclude the first part of the proof, combining all the above arguments we have the following inequality for any model P̂ and control sequence U :\n|L(U, P̂ , x0)− L(U,P, x0)| ≤ √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] . (11)\nFor the second part of the proof, consider the solution of (SOC3), namely (U∗3 , P̂ ∗ 3 ). Using the optimality condition of this problem one obtains the following inequality:\nL(U∗3 , P̂ ∗ 3 , x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ≤L(U∗1 , P̂ ∗3 , x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] .\n(12)\nUsing the results in (11) and (12), one can then show the following chain of inequalities:\nL(U∗1 , P, c, x0) ≥L(U∗1 , P̂ ∗3 , c, x0)− √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] =L(U∗1 , P̂ ∗ 3 , c, x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u))\n] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u))\n] ≥L(U∗3 , P̂ ∗3 , c, x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u))\n] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u))\n] ≥L(U∗3 , P, c, x0)− 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ,\n(13)\nwhere U∗1 is the optimizer of (SOC1) and (U ∗ 3 , P̂ ∗ 3 ) is the optimizer of (SOC3). Therefore by letting λ3 = √ 2T 2 · cmaxU and R3(P̂ ) = Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] and by\ncombining all of the above arguments, the proof of the above lemma is completed." }, { "heading": "A.2 PROOF OF LEMMA 2", "text": "For the first part of the proof, at any time-step t ≥ 1, for any arbitrary control action sequence {ut}T−1t=0 , and any model P̂ , consider the following decomposition of the expected cost :\nE[c(xt, ut) | P̂ ,x0] = ∫ x0:t−1∈X t t−1∏ k=1\ndP̂ (xk|xk−1, uk−1)·∫ zt∈Z ∫ z′t−1∈Z\ndE(z′t−1|xt−1)F (zt|z′t−1, ut−1)︸ ︷︷ ︸ dG(zt|xt−1,ut−1)\n∫ xt∈X\ndD(xt|zt)c(xt, ut)︸ ︷︷ ︸ c̄(zt,ut) .\nNow consider the following cost function: E[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0] for t > 2. 
Using the above arguments, one can express this cost as\nE[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0]\n= ∫ x0:t−2∈X t−1 t−2∏ k=1 dP̂ (xk|xk−1, uk−1) · ∫ z′t−2∈Z dE(z′t−2|xt−2) · ∫ zt−1∈Z\ndF (zt−1|z′t−2, ut−2)( c̄(zt−1, ut−1) +\n∫ xt−1∈X dD(xt−1|zt−1) ∫ z′t−1,zt∈Z dE(z′t−1|xt−1)dF (zt|z′t−1, ut−1)c̄(zt, ut) )\n≤ ∫ x0:t−2∈X t−1 t−2∏ k=1 dP̂ (xk|xk−1, uk−1) · ∫ zt−2∈Z\ndE(zt−2|xt−2)·∫ zt−1 dF (zt−1|zt−2, ut−2) ( c̄(zt−1, ut−1) + ∫ zt∈Z dF (zt|zt−1, ut−1)c̄(zt, ut) )\n+ cmax · ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) ·DTV (∫ x′∈X dP̂ (x′|xt−2, ut−2)E(·|x′)|| ∫ z∈Z dE(z|xt−2)F (·|z, ut−2) )\nBy continuing the above expansion, one can show that∣∣∣E [L(U,F, c, z0) | E, x0]− L(U, P̂ , c, x0)∣∣∣ ≤T 2 · cmax E [ 1\nT T−1∑ t=0 DTV((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0\n]\n≤ √ 2T 2 · cmax E\n[ 1\nT T−1∑ t=0 √ KL((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0\n]\n≤ √ 2T 2 · cmax √√√√E[ 1 T T−1∑ t=0 KL((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0 ] ,\nwhere the last inequality is based on Jensen’s inequality of √\n(·) function. For the second part of the proof, following similar arguments as in the second part of the proof of Lemma 1, one can show the following chain of inequalities for solution of (SOC3) and (SOC2):\nL(U∗3 , P̂ ∗ 3 , c, x0)\n≥E [L(U∗3 , F ∗3 , c, z0) | E∗3 , x0]− √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] =E [L(U∗3 , F ∗3 , c, z0) | E∗3 , x0] + √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u))\n] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u))\n] ≥E [L(U∗2 , F ∗2 , c, z0) | E∗2 , x0] + √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗2 ◦ P̂ ∗2 )(·|x, u)||(F ∗2 ◦ E∗2 )(·|x, u))\n] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u))\n] ≥L(U∗2 , P̂ ∗2 , c, x0)− 2 √ 2T 2 · cmaxU︸ ︷︷ ︸\nλ2\n· √ Ex,u [ KL((E∗2 ◦ P̂ ∗2 )(·|x, u)||(F ∗2 ◦ E∗2 )(·|x, u)) ] ︸ ︷︷ ︸\nR′′2 (P̂ ∗ 2 )\n,\n(14) where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof." }, { "heading": "A.3 PROOF OF COROLLARY 1", "text": "To start with, the total-variation distance DTV (∫ x′∈X dP̂ (x ′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) can be bounded by the following inequality using triangle inequality:\nDTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) )\n≤DTV (∫\nx′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u)\n) +DTV (∫ x′∈X dP (x′|x, u)E(·|x′)|| ∫ x′∈X dP̂ (x′|x, u)E(·|x′) )\n≤DTV (∫\nx′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u)\n) +DTV ( P (·|x, u)||P̂ (·|x, u) ) where the second inequality follows from the convexity property of the DTV-norm (w.r.t. convex weights E(·|x′), ∀x′). Then by Pinsker’s inequality, one obtains the following inequality:\nDTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) )\n≤ √ 2KL (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) + √ 2KL ( P (·|x, u)||P̂ (·|x, u) ) . (15)\nWe now analyze the batch consistency regularizer:\nR′′2 (P̂ ) = Ex,u,x′ [KL(E(·|x′)||(F ◦ E)(·|x, u))]\nand connect it with the inequality in (15). Using Jensen’s inequality of convex function x log x, for any observation-action pair (x, u) sampled from Uτ , one can show that∫\nx′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (∫ x′∈X dP (x′|x, u)E(z′|x′) ) ≤ ∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (E(z′|x′)) . 
(16)\nTherefore, for any observation-control pair (x, u) the following inequality holds:\nKL (∫\nx′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) =\n∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (∫ x′∈X dP (x′|x, u)E(z′|x′) ) − ∫ x′∈X dP (x′|x, u) log (g(x′|x, u))\n≤ ∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (E(z′|x′))− ∫ x′∈X dP (x′|x, u) log (g(x′|x, u)) =KL(E(·|x′)||(F ◦ E)(·|x, u))\n(17)\nBy taking expectation over (x, u) one can show that\nEx,u [ KL( ∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u)) ]\nis the lower bound of the batch consistency regularizer. Therefore, the above arguments imply that\nDTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) ≤ √ 2 √ R′′2 (P̂ ) +R3(P̂ ). (18)\nThe inequality is based on the property that √ a+ √ b ≤ √ 2 √ a+ b.\nEquipped with the above additional results, the rest of the proof on the performance bound follows directly from the results from Lemma 2, in which here we further upper-bound DTV (∫ x′∈X dP̂ (x ′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) , when P̂ = P̂ ∗2 ." }, { "heading": "A.4 PROOF OF LEMMA 3", "text": "For the first part of the proof, at any time-step t ≥ 1, for any arbitrary control action sequence {ut}T−1t=0 and for any model P̂ , consider the following decomposition of the expected cost : E[c(xt, ut) | P, x0] = cmax · ∫ x0:t−1∈X t t−1∏ k=1 dP (xk|xk−1, uk−1)DTV(P (·|xt−1, ut−1)||P̂ (·|xt−1, ut−1))\n+ ∫ x0:t−1∈X t t−1∏ k=1 dP (xk|xk−1, uk−1) ∫ zt∈Z ∫ z′t−1∈Z\ndE(z′t−1|xt−1)F (zt|z′t−1, ut−1)︸ ︷︷ ︸ dG(zt|xt−1,ut−1)\n∫ xt∈X\ndD(xt|zt)c(xt, ut)︸ ︷︷ ︸ c̄(zt,ut) .\nNow consider the following cost function: E[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0] for t > 2. Using the above arguments, one can express this cost as E[c(xt−1, ut−1) + c(xt, ut) | P, x0]\n= ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) · ∫ z′t−2∈Z dE(z′t−2|xt−2) · ∫ zt−1\ndF (zt−1|z′t−2, ut−2)·( c̄(zt−1, ut−1) +\n∫ xt−1 dD(xt−1|zt−1) ∫ z′t−1,zt∈Z dE(z′t−1|xt−1)dF (zt|z′t−1, ut−1)c̄(zt, ut) )\n+ cmax · 2∑ j=1 j · ∫ x0:t−j t−j∏ k=1 dP (xk|xk−1, uk−1)DTV(P (·|xt−j , ut−j)||P̂ (·|xt−j , ut−j))\n≤ ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) · ∫ zt−2∈Z\ndE(zt−2|xt−2)·∫ zt−1 dF (zt−1|zt−2, ut−2) ( c̄(zt−1, ut−1) + ∫ zt∈Z dF (zt|zt−1, ut−1)c̄(zt, ut) )\n+ cmax · 2∑ j=1 j · ∫ x0:t−j t−j∏ k=1 dP (xk|xk−1, uk−1)DTV(P (·|xt−j , ut−j)||P̂ (·|xt−j , ut−j))\n+ cmax · ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) ·DTV (∫ x′∈X dP̂ (x′|xt−2, ut−2)E(·|x′)|| ∫ z∈Z dE(z|xt−2)F (·|z, ut−2) ) .\nContinuing the above expansion, one can show that |E [L(U,F, c, z0) | E, x0]− L(U,P, x0)|\n≤T 2 · cmax E\n[ 1\nT T−1∑ t=0 DTV(P (·|xt, ut)||P̂ (·|xt, ut)) +DTV( ∫ x′∈X dP̂ (x′|xt, ut)E(·|x′)||(F ◦ E)(·|xt, ut)) | P, x0 ]\n≤ √ 2T 2 · cmax E\n[ 1\nT T−1∑ t=0 √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) + √ KL( ∫ x′∈X dP̂ (x′|xt, ut)E(·|x′)||(F ◦ E)(·|xt, ut)) | P, x0 ]\n≤ √ 2T 2 · cmax E\n[ 1\nT T−1∑ t=0 √ KL(P (·|xt, ut)||P̂ (·|xt, ut))\n+ √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) + KL(E(·|xt+1)||(F ◦ E)(·|xt, ut)) | P, x0 ]\n≤2T 2 · cmax √√√√E[ 1 T T−1∑ t=0 3KL(P (·|xt, ut)||P̂ (·|xt, ut)) + 2KL(E(·|xt+1)||(F ◦ E)(·|xt, ut)) | P, x0 ] ,\nwhere the last inequality is based on the fact that √ a+ √ b ≤ √ 2 √ a+ b and is based on Jensen’s inequality of √\n(·) function. 
For the second part of the proof, following similar arguments from Lemma 2, one can show the following inequality for the solution of (SOC3) and (SOC2):\nL(U∗1 , P, c, x0) ≥E [L(U∗1 , F ∗2 , c, z0) | E∗2 , x0]− √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 )\n=E [L(U∗1 , F ∗2 , c, z0) | E∗2 , x0] + √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 )\n− 2 √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 )\n≥E [L(U∗2 , F ∗2 , c, z0) | E∗2 , x0] + √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 )\n− 2 √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 )\n≥L(U∗2 , P, c, x0)− 2 √ 2T 2 · cmaxU︸ ︷︷ ︸ λ2\n· √\n2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ),\n(19)\nwhere the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof." }, { "heading": "A.5 PROOF OF LEMMA 4", "text": "A Recap of the Result: Let (U∗LLC, P̂ ∗LLC) be a LLC solution to (SOC-LLC) and U∗1 be a solution to (SOC1). Suppose the nominal latent state-action pair {(zt,ut)}T−1t=0 satisfies the condition: (zt,ut) ∼ N ((z∗2,t, u∗2,t), δ2I), where {(z∗2,t, u∗2,t}T−1t=0 is the optimal trajectory of problem (SOC2). Then with probability 1 − η, we have L(U∗1 , P, c, x0) ≥ L(U∗LLC, P, c, x0) − 2λLLC √ R2(P̂ ∗LLC) +RLLC(P̂ ∗ LLC) .\nDiscussions of the effect of δ on LLC Performance: The result of this lemma shows that when the nominal state and actions are δ-close to the optimal trajectory of (SOC2), i.e., at each time step (zt,ut) is a sample from the Gaussian distribution centered at (z∗2,t, u ∗ 2,t) with standard deviation δ, then one can obtain a performance bound of LLC algorithm that is in terms of the regularization loss RLLC. To quantify the above condition, one can use Mahalanobis distance (De Maesschalck et al., 2000) to measure the distance of (zt,ut) to distribution N ((z∗2,t, u∗2,t), δ2I), i.e., we want to check for the condition:\n‖(zt,ut)− (z∗2,t, u∗2,t)‖ δ ≤ ′, ∀t,\nfor any arbitrary error tolerance ′ > 0. While we cannot verify the condition without knowing the optimal trajectory {(z∗2,t, u∗2,t)}T−1t=0 , the above condition still offers some insights in choosing the parameter δ based on the trade-off of designing nominal trajectory {(zt,ut)}T−1t=0 and optimizing RLLC. When δ is large, the low-curvature regularization imposed by the RLLC regularizer will cover a large portion of the state-action space. In the extreme case when δ →∞, RLLC can be viewed as a regularizer that enforces global linearity. Here the trade-off is that the loss RLLC is generally higher, which in turn degrades the performance bound of the LLC control algorithm in Lemma 4. On the other hand, when δ is small the low-curvature regularization in RLLC only covers a smaller region of the latent state-action space, and thus the loss associated with this term is generally lower (which provides a tighter performance bound in Lemma 4). However the performance result will only hold when (zt,ut) happens to be close to (z∗2,t, u ∗ 2,t) at each time-step t ∈ {0, . . . , T − 1}.\nProof: For simplicity, we will focus on analyzing the noiseless case when the dynamics is deterministic (i.e., Σw = 0). Extending the following analysis for the case of non-deterministic dynamics should be straight-forward.\nFirst, consider any arbitrary latent state-action pair (z, u), such that the corresponding nominal state-action pair (z,u) is constructed by z = z− δz, u = u− δu, where (δz, δu) is sampled from the Gaussian distribution N (0, δ2I). 
(The random vectors are denoted as (δz′, δu′)) By the two-tailed Bernstein’s inequality (Murphy, 2012), for any arbitrarily given η ∈ (0, 1] one has the following inequality with probability 1− η:\n|fZ(z,u) +A(z,u)δz +B(z,u)δu− fZ(z, u)| ≤ √ 2 log(2/η) √ V(δz′,δu′)∼N (0,δ2I)[fZ(z,u) +A(z,u)δz′ +B(z,u)δu′ − fZ(z, u)]\n+ ∣∣E(δz′,δu′)∼N (0,δ2I)[fZ(z,u) +A(z,u)δz′ +B(z,u)δu′ − fZ(z, u)]∣∣\n≤(1 + √ 2 log(2/η)) ( E(δz′,δu′)∼N (0,δ2I) [ ‖fZ(z,u) +A(z,u)δz′ +B(z,u)δu′ − fZ(z, u)‖2 ]︸ ︷︷ ︸ RLLC(P̂ |z,u) )1/2 .\nThe second inequality is due to the basic fact that variance is less than second-order moment of a random variable. On the other hand, at each time step t ∈ {0, . . . , T −1} by the Lipschitz property of the immediate cost, the value function Vt(z) = minUt:T−1 E [ cT (zT ) + ∑T−1 τ=t cτ (zτ , uτ ) | zt = z ] is also Lipchitz with constant (T − t+ 1)clip. Using the Lipschitz property of Vt+1, for any (z, u)\nand (δz, δu), such that (z,u) = (z − δz, u− δu), one has the following property: |Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− Vt+1(fZ(z, u))| ≤(T − t)clip · |fZ(z,u) +A(z,u)δz +B(z,u)δu− fZ(z, u)| ,\n(20)\nTherefore, at any arbitrary state-action pair (z̃, ũ), for z = z − δz, and u = ũ− δu with Gaussian sample (δz, δu) ∼ N (0, δ2I), the following inequality on the value function holds w.p. 1− η:\nVt+1(fZ(z̃, ũ)) ≥ Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− (T − t)clip(1 + √ 2 log(2/η)) · √ RLLC(P̂ |z̃, ũ),\nwhich further implies\nct(z̃, ũ) + Vt+1(fZ(z̃, ũ)) ≥ct(z̃, ũ) + Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− (T − t)clip(1 + √ 2 log(2/η)) · √ RLLC(P̂ |z̃, ũ),\nNow let ũ∗ be the optimal control w.r.t. Bellman operator Tt[Vt+1](z̃) at any latent state z̃. Based on the assumption of this lemma, at each state z̃ the nominal latent state-action pair (z,u) is generated by perturbing (z̃, ũ∗) with Gaussian sample (δz, δu) ∼ N (0, δ2I) that is in form of z = z̃ − δz, u = ũ− δu. Then by the above arguments the following chain of inequalities holds w.p. 1− η:\nTt[Vt+1](z̃) := min ũ ct(z̃, ũ) + Vt+1(fZ(z̃, ũ))\n=ct(z̃, ũ ∗) + Vt+1(fZ(z̃, ũ ∗)) ≥ct(z̃, ũ∗) + Vt+1(fZ(z,u) +A(z,u)δz +B(z,u)δu) − |Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− Vt+1(fZ(z̃, ũ∗))|\n≥ct(z̃,u + δu) + Vt+1(fZ(z,u) +A(z,u)δz +B(z,u)δu) − (T − t)clip(1 + √ 2 log(2/η)) √ max z,u RLLC(P̂ |z, u)\n≥min δu ct(z̃,u + δu) + Vt+1(fZ(z,u) +A(z,u)δz +B(z,u)δu)\n− (T − t)clip(1 + √ 2 log(2/η)) √ max z,u RLLC(P̂ |z, u)\n(21)\nRecall the LLC loss function is given by RLLC(P̂ ) = Ex,u [ E [ RLLC(P̂ |z, u) | z ] | E ] .\nAlso consider the Bellman operator w.r.t. latent SOC: Tt[V ](z) = minu ct(z, u) + V (fZ(z, u)), and the Bellman operator w.r.t. LLC: Tt,LLC[V ](z) = minδu ct(z, δu+ u) + V (fZ(z,u) +A(z,u)δz + B(z,u)δu). Utilizing these definitions, the inequality in (21) can be further expressed as\nTt[Vt+1](z̃) ≥Tt,LLC[Vt+1](z̃)− (T − t)clipcmax(1 + √ 2 log(2/η)) √ UX √ RLLC(P̂ ), (22)\nThis inequality is due to the fact that all latent states are generated by the encoding observations, i.e., z ∼ E(·|x), and thus by following analogous arguments as in the proof of Lemma 1, one has\nmax z,u\nRLLC(P̂ |z, u) ≤ UXEx,u [ E [ RLLC(P̂ |z, u) | z ] | E ] = UXRLLC(P̂ ).\nTherefore, based on the dynamic programming result that bounds the difference of value function w.r.t. different Bellman operators in finite-horizon problems (for example see Theorem 1.3 in Bertsekas (1995)), the above inequality implies the following bound in the value function, w.p. 
1− η:\nmin U,P̂ L(U,F, c, z0)\n≥L(U∗LLC, P̂ ∗LLC, c, z0)− T−1∑ t=1 (T − t) · clipcmax · T · (1 + √ 2 log(2T/η)) · √ UX · √ RLLC(P̂ ∗LLC)\n≥L(U∗LLC, P̂ ∗LLC, c, z0)− T 2 · clipcmax · (1 + √ 2 log(2T/η)) · √ UX · √ RLLC(P̂ ∗LLC).\n(23)\nNotice that here we replace η in the result in (22) with η/T . In order to prove (23), we utilize (22) for each t ∈ {0, . . . , T − 1}, and this replacement is the result of applying the Union Probability bound (Murphy, 2012) (to ensure (23) holds with probability 1− η). Therefore the proof is completed by combining the above result with that in Lemma 3." }, { "heading": "B THE LATENT SPACE ILQR ALGORITHM", "text": "" }, { "heading": "B.1 PLANNING IN THE LATENT SPACE (HIGH-LEVEL DESCRIPTION)", "text": "We follow the same control scheme as in Banijamali et al. (2018). Namely, we use the iLQR (Li & Todorov, 2004) solver to plan in the latent space. Given a start observation xstart and a goal observation xgoal, corresponding to underlying states {sstart, sgoal}, we encode the observations to retrieve zstart and zgoal. Then, the procedure goes as follows: we initialize a random trajectory (sequence of actions), feed it to the iLQR solver and apply the first action from the trajectory the solver outputs. We observe the next observation returned from the system (closed-loop control), and feed the updated trajectory to the iLQR solver. This procedure continues until the it reaches the end of the problem horizon. We use a receding window approach, where at every planning step the solver only optimizes for a fixed length of actions sequence, independent of the problem horizon." }, { "heading": "B.2 DETAILS ABOUT ILQR IN THE LATENT SPACE", "text": "Consider the latent state SOC problem\nmin U\nE [ cT (zT ) +\nT−1∑ t=0 ct(zt, ut) | z0\n] .\nAt each time instance t ∈ {0, . . . , T} the value function of this problem is given by\nVT (z) = cT (z), Vt(z) = min Ut:T−1\nE [ cT (zT ) +\nT−1∑ τ=t cτ (zτ , uτ ) | zt = z\n] , ∀t < T. (24)\nRecall that the nonlinear latent space dynamics model is given by:\nzt+1 = F (zt, ut, wt) := Fµ(zt, ut) + Fσ · wt, wt ∼ N (0, I), ∀t ≥ 0, (25)\nwhere Fµ(zt, ut) is the deterministic dynamics model and F>σ Fσ is the covariance of the latent dynamics system noise. Notice that the deterministic dynamics model Fµ(zt, ut) is smooth, and therefore the following Jacobian terms are well-posed:\nA(z, u) := ∂Fµ(z, u)\n∂z , B(z, u) :=\n∂Fµ(z, u)\n∂u , ∀z ∈ Rnz , ∀u ∈ Rnu .\nBy the Bellman’s principle of optimality, at each time instance t ∈ {0, . . . , T − 1} the value function is a solution of the recursive fixed point equation\nVt(z) = min u Qt(z, u), (26)\nwhere the state-action value function at time-instance t w.r.t. state-action pair (zt, ut) = (z, u) is given by\nQt(z, u) = ct(z, u) + Ewt [Vt+1(F (zt, ut, wt)) | zt = z, ut = u] .\nIn the setting of the iLQR algorithm, assume we have access to a trajectory of latent states and actions that is in form of {(zt,ut, zt+1)}T−1t=0 . At each iteration, the iLQR algorithm has the following steps:\n1. Given a nominal trajectory, find an optimal policy w.r.t. the perturbed latent states 2. Generate a sequence of optimal perturbed actions that locally improves the cumulative cost\nof the given trajectory 3. Apply the above sequence of actions to the environment and update the nominal trajectory 4. Repeat the above steps with new nominal trajectory\nDenote by δzt = zt − zt and δut = ut − ut the deviations of state and control action at time step t respectively. 
Assuming that the nominal next state zt+1 is generated by the deterministic transition Fµ(zt,ut) at the nominal state and action pair (zt,ut), the first-order Taylor series approximation of the latent space transition is given by\nδzt+1 := zt+1−zt+1 = A(zt,ut)δzt+B(zt,ut)δut+Fσ ·wt+O(‖(δzt, δut)‖2), wt ∼ N (0, I). (27)\nTo find a locally optimal control action sequence u∗t = π ∗ δz,t(δzt) + ut, ∀t, that improves the cumulative cost of the trajectory, we compute the locally optimal perturbed policy (policy w.r.t. perturbed latent state) {π∗δz,t(δzt)} T−1 t=0 that minimizes the following second-order Taylor series approximation of Qt around nominal state-action pair (zt,ut), ∀t ∈ {0, . . . , T − 1}:\nQt(zt, ut) = Qt(zt,ut)+ 1\n2 [ 1 δzt δut ]> F>σ Fσ Qt,z(zt,ut)> Qt,u(zt,ut)>Qt,z(zt,ut) Qt,zz(zt,ut) Qt,uz(zt,ut)> Qt,u(zt,ut) Qt,uz(zt,ut) Qt,uu(zt,ut) [ 1δzt δut ] , (28)\nwhere the first and second order derivatives of the Q−function are given by\nQt,z(zt,ut) =\n[ ∂ct(zt,ut)\n∂z +A(zt,ut)\n>Vt+1,z(zt,ut) ] ,\nQt,u(zt,ut) =\n[ ∂ct(zt,ut)\n∂u +B(zt,ut)\n>Vt+1,z(zt,ut) ] ,\nQt,zz(zt,ut) =\n[ ∂2ct(zt,ut)\n∂z2 +A(zt,ut)\n>Vt+1,zz(zt,ut)A(zt,ut) ] ,\nQt,uz(zt,ut) =\n[ ∂2ct(zt,ut)\n∂u∂z +B(zt,ut)\n>Vt+1,zz(zt,ut)A(zt,ut) ] ,\nQt,uu(zt,ut) =\n[ ∂2ct(zt,ut)\n∂u2 +B(zt,ut)\n>Vt+1,zz(zt,ut)B(zt,ut) ] ,\nand the first and second order derivatives of the value functions are given by Vt+1,z(zt,ut) = Ew [ ∂Vt+1 ∂z (F (zt,ut, w)) ] , Vt+1,zz(zt,ut) = Ew [ ∂2Vt+1 ∂z2 (F (zt,ut, w)) ] . Notice that the Q-function approximation Qt in (28) is quadratic and the matrix[ Qt,zz(zt,ut) Qt,uz(zt,ut) >\nQt,uz(zt,ut) Qt,uu(zt,ut)\n] is positive semi-definite. Therefore the optimal perturbed policy\nπ∗δz,t has the following closed-form solution:\nπ∗δz,t(·) ∈ arg min δut Qt(zt, ut) =⇒ π∗δz,t(δzt) = kt(zt,ut) +Kt(zt,ut)δzt, (29)\nwhere the controller weights are given by\nkt(zt,ut) = − (Qt,uu(zt,ut))−1Qt,u(zt,ut) and Kt(zt,ut) = − (Qt,uu(zt,ut))−1Qt,uz(zt,ut). Furthermore, by putting the optimal solution into the Taylor expansion of the Q-function Qt, we get\nQt(zt, ut)−Qt(zt,ut) = 1\n2 [ 1 δzt ]> [ Q∗t,11(zt,ut) ( Q∗t,21(zt,ut) )> Q∗t,21(zt,ut) Q ∗ t,22(zt,ut) ] [ 1 δzt ] ,\nwhere the closed-loop first and second order approximations of the Q-function are given by\nQ∗t,11(zt,ut) = C > wCw −Qt,u(zt,ut)>Qt,uu(zt,ut)−1Qt,u(zt,ut),\nQ∗t,21(zt,ut) = Qt,z(zt,ut) > − kt(zt,ut)>Qt,uu(zt,ut)Kt(zt,ut),\nQ∗t,22(zt,ut) = Qt,zz(zt,ut)−Kt(zt,ut)>Qt,uu(zt,ut)Kt(zt,ut). Notice that at time step t the optimal value function also has the following form: Vt(zt) = min\nδut Qt(zt, ut) =δQt(zt,ut, δzt, π\n∗ δz,t(δzt)) +Qt(zt,ut)\n= 1\n2 [ 1 δzt ]> [ Q∗t,11(zt,ut) ( Q∗t,21(zt,ut) )> Q∗t,21(zt,ut) Q ∗ t,22(zt,ut) ] [ 1 δzt ] +Qt(zt,ut).\n(30)\nTherefore, the first and second order differential value functions can be Vt,z(zt,ut) = Q∗t,21(zt,ut), Vt,zz(zt,ut) = Q∗t,22(zt,ut),\nand the value improvement at the nominal state zt at time step t is given by\nVt(zt) = 1\n2 Q∗t,11(zt,ut) +Qt(zt,ut).\nB.3 INCORPORATING RECEDING-HORIZON TO ILQR\nWhile iLQR provides an effective way of computing a sequence of (locally) optimal actions, it has two limitations. First, unlike RL in which an optimal Markov policy is computed, this algorithm only finds a sequence of open-loop optimal control actions under a given initial observation. Second, the iLQR algorithm requires the knowledge of a nominal (latent state and action) trajectory at every iteration, which restricts its application to cases only when real-time interactions with environment are possible. 
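To make the closed-form solution in (29) concrete, the following minimal numpy sketch computes the feedforward and feedback gains from the Q-function derivatives. The small Tikhonov regularization added to Q_uu is a standard iLQR stabilization trick and an assumption on our part, not something prescribed above.

```python
import numpy as np

def ilqr_gains(Q_uu, Q_u, Q_uz, reg=1e-6):
    """Gains of the locally optimal perturbed policy in (29):
    k = -Q_uu^{-1} Q_u (feedforward), K = -Q_uu^{-1} Q_uz (feedback)."""
    Q_uu_reg = Q_uu + reg * np.eye(Q_uu.shape[0])  # keep Q_uu positive definite
    k = -np.linalg.solve(Q_uu_reg, Q_u)
    K = -np.linalg.solve(Q_uu_reg, Q_uz)
    return k, K  # optimal perturbed action: delta_u* = k + K @ delta_z
```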
In order to extend the iLQR paradigm into the closed-loop RL setting, we utilize the concept of model predictive control (MPC) (Rawlings & Mayne, 2009; Borrelli et al., 2017) and propose the following iLQR-MPC procedure. Initially, given an initial latent state z0 we generate a single nominal trajectory: {(zt,ut, zt+1)}T−1t=0 , whose sequence of actions is randomly sampled, and the latent states are updated by forward propagation of latent state dynamics (instead of interacting with environment), i.e., z0 = z0, zt+1 ∼ F (zt,ut, wt), ∀t. Then at each time-step k ≥ 0, starting at latent state zk we compute the optimal perturbed policy {π∗δz,t(·)} T−1 t=k using the iLQR algorithm with T − k lookahead steps. Having access to the perturbed latent state δzk = zk − zk, we only deploy the first action u∗k = π ∗ δz,k(δzk) + uk in the environment and observe the next latent state zk+1. Then, using the subsequent optimal perturbed policy {π∗δz,t(·)} T−1 t=k+1, we generate both the estimated latent state sequence {ẑt}Tt=k+1 by forward propagation with initial state ẑk+1 = zk+1 and action sequence {u∗t }T−1t=k+1, where u∗t = π∗δz,t(δẑt) + ut, and δẑt = ẑt − zt. Then one updates the subsequent nominal trajectory as follows: {(zt,ut, zt+1)}T−1t=k+1 = {(ẑt, u∗t , ẑt+1)} T−1 t=k+1, and repeats the above procedure.\nConsider the finite-horizon MDP problem minπt,∀t E [ cT (xT ) + ∑T−1 t=0 ct(xt, ut) | πt, P, x0 ] ,\nwhere the optimizer π is over the class of Markovian policies. (Notice this problem is the closed-loop version of (SOC1).) Using the above iLQR-MPC procedure, at each time step t ∈ {0, . . . , T − 1} one can construct a Markov policy that is in form of\nπt,iLQR-MPC(·|xt) := uiLQRt s.t. {u iLQR t , · · · , u iLQR T−1} ← iLQR(L(U, P̃ ∗, c, zt)), with zt ∼ E(·|xt),\nwhere iLQR(`Control(U ; P̃ ∗, z)) denotes the iLQR algorithm with initial latent state z. To understand the performance of this policy w.r.t. the MDP problem, we refer to the sub-optimality bound of iLQR (w.r.t. open-loop control problem in (SOC1)) in Section 3.3, as well as that for MPC policy, whose details can be found in Borrelli et al. (2017)." }, { "heading": "C TECHNICAL PROOFS OF SECTION 4", "text": "" }, { "heading": "C.1 DERIVATION OF R′3,NLE-BOUND(P̂ , Q) DECOMPOSITION", "text": "We derive the bound for the conditional log-likelihood log P̂ (xt+1|xt, ut).\nlog P̂ (xt+1|xt, ut) = log ∫ zt,ẑt+1 P̂ (xt+1, zt, ẑt+1|xt, ut)dztdẑt+1\n= logEQ(zt,ẑt+1|xt,xt+1,ut) [ P̂ (xt+1, zt, ẑt+1|xt, ut) Q(zt, ẑt+1|xt, xt+1, ut) ] (a) ≥ EQ(zt,ẑt+1|xt,xt+1,ut) [ log\nP̂ (xt+1, zt, ẑt+1|xt, ut) Q(zt, ẑt+1|xt, xt+1, ut) ] (b) = E Q(ẑt+1|xt+1)\nQ(zt|ẑt+1,xt,ut)\n[ log\nP̂ (zt|xt)P̂ (ẑt+1|zt, ut)P̂ (xt+1|ẑt+1) Q(ẑt+1|xt+1)Q(zt|ẑt+1, xt, ut) ] (c) = EQ(ẑt+1|xt+1) [ log P̂ (xt+1|ẑt+1)\n] − EQ(ẑt+1|xt+1) [ DKL ( Q(zt|ẑt+1, xt, ut)||P̂ (zt|xt)\n)] +H (Q(ẑt+1|xt+1))\n+ E Q(ẑt+1|xt+1) Q(zt|ẑt+1,xt,ut)\n[ log P̂ (ẑt+1|zt, ut) ] = R′3,NLE-Bound(P̂ , Q)\nWhere (a) holds from the log function concavity, (b) holds by the factorization Q(zt, ẑt+1|xt, xt+1, ut) = Q(ẑt+1|xt+1)Q(zt|ẑt+1, xt, ut), and (c) holds by a simple decomposition to the different components." 
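As noted in Section 4.1, when the variational factors are factorized Gaussians, every entropy and KL term appearing in the bounds above is available in closed form. A minimal torch.distributions sketch (with purely illustrative shapes and parameters) of the two tractable terms:

```python
import torch
from torch.distributions import Normal, kl_divergence

# Illustrative factorized Gaussians standing in for the variational factors:
q_back = Normal(torch.zeros(2), torch.ones(2))      # Q(z_t | z_hat_{t+1}, x_t, u_t)
p_enc  = Normal(torch.zeros(2), 2 * torch.ones(2))  # P_hat(z_t | x_t)

kl_term      = kl_divergence(q_back, p_enc).sum(-1)  # D_KL(Q || P_hat), closed form
entropy_term = q_back.entropy().sum(-1)              # H(Q), closed form
```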
}, { "heading": "C.2 DERIVATION OF R′′2,BOUND(P̂ , Q) DECOMPOSITION", "text": "We derive the bound for the consistency loss `Consistency(P̂ ).\nR′′2 (P̂ ) = DKL ( P̂ (zt+1|xt+1)‖ ∫ zt P̂ (ẑt+1|zt, ut)P̂ (zt|xt)dzt )\n(a) = −H (Q(ẑt+1|xt+1))− EQ(ẑt+1|xt+1)\n[ log ∫ zt P̂ (ẑt+1|zt, ut)P̂ (zt|xt)dzt ]\n= −H (Q(ẑt+1|xt+1))− EQ(ẑt+1|xt+1) [ logEQ(zt|ẑt+1,xt,ut) [ P̂ (ẑt+1|zt, ut)P̂ (zt|xt) Q(zt|ẑt+1, xt, ut) ]] (b) ≤ −H (Q(ẑt+1|xt+1))− E Q(ẑt+1|xt+1) Q(zt|ẑt+1,xt,ut) [ log P̂ (ẑt+1|zt, ut)P̂ (zt, xt) Q(zt|ẑt+1, xt, ut)\n] (c) = − ( − EQ(ẑt+1|xt+1) [ DKL ( Q(zt|ẑt+1, xt, ut)||P̂ (zt|xt)\n)] +H (Q(ẑt+1|xt+1)) + E Q(ẑt+1|xt+1)\nQ(zt|ẑt+1,xt,ut)\n[ log P̂ (ẑt+1|zt, ut) ]) = R′′2,Bound(P̂ , Q)\nWhere (a) holds by the assumption that Q(ẑt+1 | xt+1) = P̂ (zt+1 | xt+1), (b) holds from the log function concavity, and (c) holds by a simple decomposition to the different components." }, { "heading": "D EXPERIMENTAL DETAILS", "text": "In the following sections we will provide the description of the data collection process, domains, and implementation details used in the experiments." }, { "heading": "D.1 DATA COLLECTION PROCESS", "text": "To generate our training and test sets, each consists of triples (xt, ut, xt+1), we: (1) sample an underlying state st and generate its corresponding observation xt, (2) sample an action ut, and (3) obtain the next state st+1 according to the state transition dynamics, add it a zero-mean Gaussian noise with variance σ2Ins , and generate it’s corresponding observation xt+1.To ensure that the observation-action data is uniformly distributed (see Section 3), we sample the state-action pair (st, ut) uniformly from the state-action space. To understand the robustness of each model, we consider both deterministic (σ = 0) and stochastic scenarios. In the stochastic case, we add noise to the system with different values of σ and evaluate the models’ performance under various degree of noise." }, { "heading": "D.2 DESCRIPTION OF THE DOMAINS", "text": "Planar System In this task the main goal is to navigate an agent in a surrounded area on a 2D plane (Breivik & Fossen, 2005), whose goal is to navigate from a corner to the opposite one, while avoiding the six obstacles in this area. The system is observed through a set of 40 × 40 pixel images taken from the top view, which specifies the agent’s location in the area. Actions are two-dimensional and specify the x− y direction of the agent’s movement, and given these actions the next position of the agent is generated by a deterministic underlying (unobservable) state evolution function. Start State: one of three corners (excluding bottom-right). Goal State: bottom-right corner. Agent’s Objective: agent is within Euclidean distance of 2 from the goal state.\nInverted Pendulum — SwingUp & Balance This is the classic problem of controlling an inverted pendulum (Furuta et al., 1991) from 48× 48 pixel images. The goal of this task is to swing up an under-actuated pendulum from the downward resting position (pendulum hanging down) to the top position and to balance it. The underlying state st of the system has two dimensions: angle and angular velocity, which is unobservable. The control (action) is 1-dimensional, which is the torque applied to the joint of the pendulum. To keep the Markovian property in the observation (image) space, similar to the setting in E2C and RCE, each observation xt contains two images generated from consecutive time-frames (from current time and previous time). 
This is because each image only shows the position of the pendulum and does not contain any information about the velocity. Start State: the pole is resting down (SwingUp), or randomly sampled within ±π/6 (Balance). Agent's Objective: the pole's angle is within ±π/6 of an upright position.
CartPole This is the visual version of the classic task of controlling a cart-pole system (Geva & Sitte, 1993). The goal in this task is to balance a pole on a moving cart, while the cart avoids hitting the left and right boundaries. The control (action) is 1-dimensional: the force applied to the cart. The underlying state $s_t$ of the system is 4-dimensional, indicating the angle and angular velocity of the pole, as well as the position and velocity of the cart. Similar to the inverted pendulum, in order to maintain the Markovian property the observation $x_t$ is a stack of two 80 × 80 pixel images generated from consecutive time-frames. Start State: the pole is randomly sampled within ±π/6. Agent's Objective: the pole's angle is within ±π/10 of an upright position.
3-link Manipulator — SwingUp & Balance The goal in this task is to move a 3-link manipulator from the initial position (the downward resting position) to a final position (the top position) and balance it. In the 1-link case, this experiment reduces to the inverted pendulum. In the 2-link case the setup is similar to that of the acrobot (Spong, 1995), except that torques are applied to all intermediate joints, and in the 3-link case the setup is similar to that of the 3-link planar robot arm domain used in the E2C paper, except that the robotic arms are modeled by simple rectangular rods (instead of real images of robot arms), and our task success criterion requires both swing-up (manipulation to the final position) and balance.12 The underlying (unobservable) state $s_t$ of the system is 2N-dimensional, indicating the relative angle and angular velocity at each link, and the actions are N-dimensional, representing the force applied to each joint of the arm. The state evolution is modeled by the standard Euler-Lagrange equations (Spong, 1995; Lai et al., 2015). Similar to the inverted pendulum and cart-pole, in order to maintain the Markovian property, the observation $x_t$ is a stack of two 80 × 80 pixel images of the N-link manipulator generated from consecutive time-frames. In the experiments we evaluate the models for N = 2 (2-link manipulator) and N = 3 (3-link manipulator). Start State: 1st pole at angle π, 2nd pole at angle 2π/3, and 3rd pole at angle π/3, where angle π is the resting position. Agent's Objective: the sum of all poles' angles is within ±π/6 of an upright position.
12 Unfortunately, due to copyright issues, we cannot test our algorithms on the original 3-link planar robot arm domain.
TORCS Simulator This task takes place in the TORCS simulator (Wymann et al., 2000) (specifically the michigan f1 race track, straight lane only). The goal of this task is to control a car so that it remains in the middle of the lane. We restricted the task to steering actions only (left/right in the range [−1, 1]), and applied a simple procedure to ensure that the velocity of the car is always around 10. We pre-processed the observations given by the simulator (240 × 320 RGB images) into 80 × 80 binary images (white pixels represent the road). 
In order to maintain the Markovian property, the observation $x_t$ is a stack of two 80 × 80 images (the two images are 7 frames apart, chosen so that consecutive observations differ noticeably). The task goes as follows: the car is forced to steer strongly left (action = 1) or strongly right (action = −1) for the initial 20 steps of the simulation (direction chosen randomly), which causes it to drift away from the center of the lane. Then, for the remaining horizon of the task, the car needs to recover from the drift, return to the middle of the lane, and stay there. Start State: 20 steps of drifting from the middle of the lane by steering strongly left or right (chosen randomly). Agent's Objective: the agent (car) is within Euclidean distance of 1 from the middle of the lane (the full width of the lane is about 18).
D.3 IMPLEMENTATION
In the following we describe the architectures and hyper-parameters that were used for training the different algorithms." }, { "heading": "D.3.1 TRAINING HYPER-PARAMETERS AND REGULARIZERS", "text": "All the algorithms were trained using:
• Batch size of 128. • ADAM (Goodfellow et al., 2016) with $\alpha = 5\cdot 10^{-4}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. • L2 regularization with a coefficient of $10^{-3}$. • An additional VAE (Kingma & Welling, 2013) loss term given by $\ell^{\text{VAE}}_t = -\mathbb{E}_{q(z|x)}[\log p(x|z)] + D_{\mathrm{KL}}(q(z|x)\,\|\,p(z))$, where $p(z) \sim \mathcal{N}(0, 1)$. The term was added with a very small coefficient of 0.01. We found this term to be important for stabilizing the training process, as there is no explicit term that governs the scale of the latent space.
E2C training specifics:
• λ from the E2C loss term was tuned using a parameter sweep over {0.25, 0.5, 1}, and was chosen to be 0.25 across all domains, as it performed best independently for each domain.
PCC training specifics:
• $\lambda_p$ was set to 1 across all domains. • $\lambda_c$ was set to 7 across all domains, after being tuned using a parameter sweep over {1, 3, 7, 10} on the Planar system. • $\lambda_{\text{cur}}$ was set to 1 across all domains without any tuning. • $\{\bar{z}, \bar{u}\}$, for the curvature loss, were generated from $\{z, u\}$ by adding Gaussian noise $\mathcal{N}(0, 0.1^2)$, where $\sigma = 0.1$ was used across all domains without any tuning. • Motivated by Hafner et al. (2018), we added a deterministic loss term in the form of a cross entropy between the output of the generative path given the current observation and action (taking the means of the encoder output and the dynamics model output) and the observation of the next state. This loss term was added with a coefficient of 0.3 across all domains, after being tuned using a parameter sweep over {0.1, 0.3, 0.5} on the Planar system." }, { "heading": "D.3.2 NETWORK ARCHITECTURES", "text": "We next present the specific architecture choices for each domain. For a fair comparison, the numbers of layers and neurons of each component were shared across all algorithms. ReLU non-linearities were used between every two layers.
Encoder: composed of a backbone (either an MLP or a CNN, depending on the domain) and an additional fully-connected layer that outputs mean and variance vectors that induce a diagonal Gaussian distribution.
Decoder: composed of a backbone (either an MLP or a CNN, depending on the domain) and an additional fully-connected layer that outputs logits that induce a Bernoulli distribution.
Dynamical model: the path that leads from $\{z_t, u_t\}$ to $\hat{z}_{t+1}$. 
Composed of an MLP backbone and an additional fully-connected layer that outputs mean and variance vectors that induce a diagonal Gaussian distribution. We further added a skip connection from $z_t$ and summed it with the output of the mean vector. When using the amortized version, there are two additional outputs, A and B.
Backwards dynamical model: the path that leads from $\{\hat{z}_{t+1}, u_t, x_t\}$ to $z_t$. Each of the inputs goes through a fully-connected layer with $\{N_z, N_u, N_x\}$ neurons, respectively. The outputs are then concatenated and passed through another fully-connected layer with $N_{\text{joint}}$ neurons, and finally through an additional fully-connected layer that outputs the mean and variance vectors that induce a diagonal Gaussian distribution." }, { "heading": "Planar system", "text": "• Input: 40 × 40 images. 5000 training samples of the form $(x_t, u_t, x_{t+1})$ • Action space: 2-dimensional • Latent space: 2-dimensional • Encoder: 3 layers: 300 units - 300 units - 4 units (2 for mean and 2 for variance) • Decoder: 3 layers: 300 units - 300 units - 1600 units (logits) • Dynamics: 3 layers: 20 units - 20 units - 4 units • Backwards dynamics: $N_z = 5$, $N_u = 5$, $N_x = 100$ - $N_{\text{joint}} = 100$ - 4 units • Number of control actions (i.e., the planning horizon): T = 40" }, { "heading": "Inverted Pendulum — Swing Up & Balance", "text": "• Input: Two 48 × 48 images. 20000 training samples of the form $(x_t, u_t, x_{t+1})$ • Action space: 1-dimensional • Latent space: 3-dimensional • Encoder: 3 layers: 500 units - 500 units - 6 units (3 for mean and 3 for variance) • Decoder: 3 layers: 500 units - 500 units - 4608 units (logits) • Dynamics: 3 layers: 30 units - 30 units - 6 units • Backwards dynamics: $N_z = 10$, $N_u = 10$, $N_x = 200$ - $N_{\text{joint}} = 200$ - 6 units • Number of control actions (i.e., the planning horizon): T = 400" }, { "heading": "Cart-pole Balancing", "text": "• Input: Two 80 × 80 images. 15000 training samples of the form $(x_t, u_t, x_{t+1})$ • Action space: 1-dimensional • Latent space: 8-dimensional • Encoder: 6 layers: Convolutional layer: 32 × 5 × 5; stride (1, 1) - Convolutional layer: 32 × 5 × 5; stride (2, 2) - Convolutional layer: 32 × 5 × 5; stride (2, 2) - Convolutional layer: 10 × 5 × 5; stride (2, 2) - 200 units - 16 units (8 for mean and 8 for variance) • Decoder: 6 layers: 200 units - 1000 units - 100 units - Convolutional layer: 32 × 5 × 5; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: 32 × 5 × 5; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: 32 × 5 × 5; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: 2 × 5 × 5; stride (1, 1) • Dynamics: 3 layers: 40 units - 40 units - 16 units • Backwards dynamics: $N_z = 10$, $N_u = 10$, $N_x = 300$ - $N_{\text{joint}} = 300$ - 16 units • Number of control actions (i.e., the planning horizon): T = 200
3-link Manipulator — Swing Up & Balance
• Input: Two 80 × 80 images. 
30000 training samples of the form $(x_t, u_t, x_{t+1})$ • Action space: 3-dimensional • Latent space: 8-dimensional • Encoder: 6 layers: Convolutional layer: 62 × 5 × 5; stride (1, 1) - Convolutional layer: 32 × 5 × 5; stride (2, 2) - Convolutional layer: 32 × 5 × 5; stride (2, 2) - Convolutional layer: 10 × 5 × 5; stride (2, 2) - 500 units - 16 units (8 for mean and 8 for variance) • Decoder: 6 layers: 500 units - 2560 units - 100 units - Convolutional layer: 32 × 5 × 5; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: 32 × 5 × 5; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: 32 × 5 × 5; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: 2 × 5 × 5; stride (1, 1) • Dynamics: 3 layers: 40 units - 40 units - 16 units • Backwards dynamics: $N_z = 10$, $N_u = 10$, $N_x = 400$ - $N_{\text{joint}} = 400$ - 16 units • Number of control actions (i.e., the planning horizon): T = 400" }, { "heading": "TORCS", "text": "• Input: Two 80 × 80 images. 30000 training samples of the form $(x_t, u_t, x_{t+1})$ • Action space: 1-dimensional • Latent space: 8-dimensional • Encoder: 6 layers: Convolutional layer: 32 × 5 × 5; stride (1, 1) - Convolutional layer: 32 × 5 × 5; stride (2, 2) - Convolutional layer: 32 × 5 × 5; stride (2, 2) - Convolutional layer: 10 × 5 × 5; stride (2, 2) - 200 units - 16 units (8 for mean and 8 for variance) • Decoder: 6 layers: 200 units - 1000 units - 100 units - Convolutional layer: 32 × 5 × 5; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: 32 × 5 × 5; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: 32 × 5 × 5; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: 2 × 5 × 5; stride (1, 1) • Dynamics: 3 layers: 40 units - 40 units - 16 units • Backwards dynamics: $N_z = 10$, $N_u = 10$, $N_x = 300$ - $N_{\text{joint}} = 300$ - 16 units • Number of control actions (i.e., the planning horizon): T = 200" }, { "heading": "E ADDITIONAL RESULTS", "text": "" }, { "heading": "E.1 PERFORMANCE ON NOISY DYNAMICS", "text": "Table 3 shows the results for the noisy cases." }, { "heading": "E.2 LATENT SPACE REPRESENTATION FOR THE PLANAR SYSTEM", "text": "The following figures depict 5 instances (randomly chosen from the 10 trained models) of the learned latent space representations for both the noiseless and the noisy planar system, for the PCC, RCE, and E2C models." } ]
2020
null
SP:2656017dbf3c1e8b659857d3a44fdbb91e186237
[ "This paper proposes a neural network architecture to classify graph structure. A graph is specified using its adjacency matrix, and the authors prose to extract features by identifying temples, implemented as small kernels on sub matrices of the adjacency matrix. The main problem is how to handle isomorphism: there is no node order in a graph. The authors propose to test against all permutations of the kernel, and choose the permutation with minimal activation. Thus, the network can learn isomorphic features of the graph. This idea is used for binary graph classification on a number of tasks.", "This paper proposes a new neural network architecture for dealing with graphs dealing with the lack of order of the nodes. The first step called the graph isomorphic layer compute features invariant to the order of nodes by extracting sub-graphs and cosidering all possible permutation of these subgraphs. There is no training involved here as no parameter is learned. Indeed the only learning part is in the so-called classification component which is a (standard) fully connected layer. In my opinion, any classification algorithm could be used on the features extracted from the graphs." ]
Deep learning models have achieved huge success in numerous fields, such as computer vision and natural language processing. However, unlike in such fields, it is hard to apply traditional deep learning models to graph data due to the 'node-orderless' property. Normally, adjacency matrices cast an artificial and random node-order on graphs, which renders the performance of deep models on graph classification tasks extremely erratic, and the representations learned by such models lack clear interpretability. To eliminate the unnecessary node-order constraint, we propose a novel model named Isomorphic Neural Network (ISONN), which learns the graph representation by extracting its isomorphic features via graph matching between the input graph and templates. ISONN has two main components: a graph isomorphic feature extraction component and a classification component. The graph isomorphic feature extraction component utilizes a set of subgraph templates as the kernel variables to learn the possible subgraph patterns existing in the input graph and then computes the isomorphic features. A set of permutation matrices is used in this component to break the node-order brought by the matrix representation. Three fully-connected layers are used as the classification component in ISONN. Extensive experiments are conducted on benchmark datasets; the experimental results demonstrate the effectiveness of ISONN, especially compared with both classic and state-of-the-art graph classification methods.
[]
[ { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Rami Al-Rfou", "Alexander A Alemi" ], "title": "Watch your step: Learning node embeddings via graph attention", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "James Atwood", "Don Towsley" ], "title": "Diffusion-convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yunsheng Bai", "Hao Ding", "Yizhou Sun", "Wei Wang" ], "title": "Convolutional set matching for graph similarity", "venue": "arXiv preprint arXiv:1810.10866,", "year": 2018 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto Garcia-Duran", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating embeddings for modeling multi-relational data", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Bokai Cao", "Xiangnan Kong", "Jingyuan Zhang", "S Yu Philip", "Ann B Ragin" ], "title": "Identifying hiv-induced subgraph patterns in brain networks with side information", "venue": "Brain informatics,", "year": 2015 }, { "authors": [ "Bokai Cao", "Liang Zhan", "Xiangnan Kong", "S Yu Philip", "Nathalie Vizueta", "Lori L Altshuler", "Alex D Leow" ], "title": "Identification of discriminative subgraph patterns in fmri brain networks in bipolar affective disorder", "venue": "In International Conference on Brain Informatics and Health,", "year": 2015 }, { "authors": [ "Mohammed Elseidy", "Ehab Abdelhamid", "Spiros Skiadopoulos", "Panos Kalnis" ], "title": "Grami: Frequent subgraph and pattern mining in a single large graph", "venue": "Proceedings of the VLDB Endowment,", "year": 2014 }, { "authors": [ "Qiang Gao", "Fan Zhou", "Kunpeng Zhang", "Goce Trajcevski", "Xucheng Luo", "Fengli Zhang" ], "title": "Identifying human mobility via trajectory embeddings", "venue": "In International Joint Conferences on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Benoit Gaüzere", "Luc Brun", "Didier Villemin" ], "title": "Two new graphs kernels in chemoinformatics", "venue": "Pattern Recognition Letters,", "year": 2012 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Steven Hill", "Bismita Srichandan", "Rajshekhar Sunderraman" ], "title": "An iterative mapreduce approach to frequent subgraph mining in biological datasets", "venue": "In Proceedings of the ACM Conference on Bioinformatics, Computational Biology and Biomedicine,", "year": 2012 }, { "authors": [ "Ning Jin", "Calvin 
Young", "Wei Wang" ], "title": "Graph classification based on pattern co-occurrence", "venue": "In Proceedings of the 18th ACM conference on Information and knowledge management,", "year": 2009 }, { "authors": [ "Hisashi Kashima", "Koji Tsuda", "Akihiro Inokuchi" ], "title": "Marginalized kernels between labeled graphs", "venue": "In Proceedings of the 20th international conference on machine learning", "year": 2003 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Xiangnan Kong", "Philip S Yu" ], "title": "Semi-supervised feature selection for graph classification", "venue": "In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2010 }, { "authors": [ "Xiangnan Kong", "Philip S Yu", "Xue Wang", "Ann B Ragin" ], "title": "Discriminative feature selection for uncertain graph classification", "venue": "In Proceedings of the 2013 SIAM International Conference on Data Mining,", "year": 2013 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Yi-An Lai", "Chin-Chi Hsu", "Wen Hao Chen", "Mi-Yen Yeh", "Shou-De Lin" ], "title": "Prune: Preserving proximity and global ranking for network embedding", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "John Boaz Lee", "Ryan Rossi", "Xiangnan Kong" ], "title": "Graph classification using structural attention", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Wenqing Lin", "Xiaokui Xiao", "Gabriel Ghinita" ], "title": "Large-scale frequent subgraph mining in mapreduce", "venue": "In 2014 IEEE 30th International Conference on Data Engineering,", "year": 2014 }, { "authors": [ "Yankai Lin", "Zhiyuan Liu", "Maosong Sun", "Yang Liu", "Xuan Zhu" ], "title": "Learning entity and relation embeddings for knowledge graph completion", "venue": "In Twenty-ninth AAAI conference on artificial intelligence,", "year": 2015 }, { "authors": [ "Jonathan Masci", "Davide Boscaini", "Michael Bronstein", "Pierre Vandergheynst" ], "title": "Geodesic convolutional neural networks on riemannian manifolds", "venue": "In Proceedings of the IEEE international conference on computer vision workshops,", "year": 2015 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodola", "Jan Svoboda", "Michael M Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Aida Mrzic", "Pieter Meysman", "Wout Bittremieux", "Pieter Moris", "Boris Cule", "Bart Goethals", "Kris Laukens" ], "title": "Grasping frequent subgraph mining for bioinformatics applications", "venue": "BioData mining,", "year": 2018 }, { "authors": [ "Annamalai Narayanan", "Mahinthan Chandramohan", "Rajasekar Venkatesan", "Lihui Chen", "Yang Liu", "Shantanu Jaiswal" ], "title": "graph2vec: Learning distributed representations of graphs", "venue": "arXiv preprint arXiv:1707.05005,", "year": 2017 }, { "authors": [ "Fengcai Qiao", "Xin Zhang", "Pei Li", "Zhaoyun Ding", "Shanshan Jia", "Hui 
Wang" ], "title": "A parallel approach for frequent subgraph mining in a single large graph using spark", "venue": "Applied Sciences,", "year": 2018 }, { "authors": [ "Hiroto Saigo", "Sebastian Nowozin", "Tadashi Kadowaki", "Taku Kudo", "Koji Tsuda" ], "title": "gboost: a mathematical programming approach to graph classification and regression", "venue": "Machine Learning,", "year": 2009 }, { "authors": [ "Nino Shervashidze", "Pascal Schweitzer", "Erik Jan van Leeuwen", "Kurt Mehlhorn", "Karsten M Borgwardt" ], "title": "Weisfeiler-lehman graph kernels", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Marisa Thoma", "Hong Cheng", "Arthur Gretton", "Jiawei Han", "Hans-Peter Kriegel", "Alex Smola", "Le Song", "Philip S Yu", "Xifeng Yan", "Karsten Borgwardt" ], "title": "Near-optimal supervised feature selection among frequent subgraphs", "venue": "In Proceedings of the 2009 SIAM International Conference on Data Mining,", "year": 2009 }, { "authors": [ "Tong Tong", "Katherine Gray", "Qinquan Gao", "Liang Chen", "Daniel Rueckert" ], "title": "Nonlinear graph fusion for multi-modal classification of alzheimer’s disease", "venue": "In International Workshop on Machine Learning in Medical Imaging,", "year": 2015 }, { "authors": [ "Tong Tong", "Katherine Gray", "Qinquan Gao", "Liang Chen", "Daniel Rueckert" ], "title": "Alzheimer’s Disease Neuroimaging Initiative, et al. Multi-modal classification of alzheimer’s disease using nonlinear graph fusion", "venue": "Pattern recognition,", "year": 2017 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of machine learning research,", "year": 2010 }, { "authors": [ "Shen Wang", "Lifang He", "Bokai Cao", "Chun-Ta Lu", "Philip S Yu", "Ann B Ragin" ], "title": "Structural deep brain network mining", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Jia Wu", "Shirui Pan", "Xingquan Zhu", "Zhihua Cai" ], "title": "Boosting for multi-graph classification", "venue": "IEEE transactions on cybernetics,", "year": 2014 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "arXiv preprint arXiv:1810.00826,", "year": 2018 }, { "authors": [ "Xifeng Yan", "Jiawei Han" ], "title": "gspan: Graph-based substructure pattern mining", "venue": "IEEE International Conference on Data Mining,", "year": 2002 }, { "authors": [ "Muhan Zhang", "Zhicheng Cui", "Marion Neumann", "Yixin Chen" ], "title": "An end-to-end deep learning architecture for graph classification", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The graph structure is attracting increasing interests because of its great representation power on various types of data. Researchers have done many analyses based on different types of graphs, such as social networks, brain networks and biological networks. In this paper, we will focus on the binary graph classification problem, which has extensive applications in the real world. For example, one may wish to identify the social community categories according to the users’ social interactions (Gao et al., 2017), distinguish the brain states of patients via their brain networks (Wang et al., 2017), and classify the functions of proteins in a biological interaction network (Hamilton et al., 2017).\nTo address the graph classification task, many approaches have been proposed. One way to estimate the usefulness of subgraph features is feature evaluation criteria based on both labeled and unlabeled graphs (Kong & Yu, 2010). Some other works also proposed to design a pattern exploration approach based on pattern co-occurrence and build the classification model (Jin et al., 2009) or develop a boosting algorithm (Wu et al., 2014). However, such works based on BFS or DFS cannot avoid computing a large quantity of possible subgraphs, which causes high computational complexity though the explicit subgraphs are maintained. Recently, deep learning models are also widely used to solve the graph-oriented problems. Although some deep models like MPNN (Gilmer et al., 2017) and GCN (Kipf & Welling, 2016) learn implicit structural features, the explict structural information cannot be maintained for further research. Besides, most existing works on graph classification use the aggregation of the node features in graphs as the graph representation (Xu et al., 2018; Hamilton et al., 2017), but simply doing aggregation on the whole graph cannot capture the substructure precisely. While there are other models can capture the subgraphs, they often need more complex computation and mechanism (Wang et al., 2017; Narayanan et al., 2017) or need additonal node labels to find the subgraph strcuture (Gaüzere et al., 2012; Shervashidze et al., 2011).\nHowever, we should notice that when we deal with the graph-structured data, different node-orders will result in very different adjacency matrix representations for most existing deep models which take the adjacency matrices as input if there is no other information on graph. Therefore, compared with the original graph, matrix naturally poses a redundant constraint on the graph node-order. Such a node-order is usually unnecessary and manually defined. The different graph matrix representations brought by the node-order differences may render the learning performance of the existing models to be extremely erratic and not robust. Formally, we summarize the encountered challenges in the graph classification problem as follows:\n• Explicit useful subgraph extraction. The existing works have proposed many discriminative models to discover useful subgraphs for graph classification, and most of them require manual efforts. Nevertheless, how to select the contributing subgraphs automatically without any additional manual involvement is a challenging problem.\n• Graph representation learning. Representing graphs in the vector space is an important task since it facilitates the storage, parallelism and the usage of machine learning models for the graph data. 
Extensive works have been done on node representations (Grover & Leskovec, 2016; Lin et al., 2015; Lai et al., 2017; Hamilton et al., 2017), whereas learning the representation of a whole graph with clear interpretability is still an open problem requiring more exploration.
• Node-order elimination for subgraphs. Nodes in graphs are orderless, whereas the matrix representations of graphs cast an unnecessary order on the nodes, which also renders the features extracted with existing learning models, e.g., CNNs, useless for graphs. For subgraphs, this problem also exists. Thus, how to break such a node-order constraint for subgraphs is challenging.
• Efficient matching for large subgraphs. To break the node-order, we try all possible node permutations to find the best permutation for a subgraph. Clearly, trying all possible permutations is a combinatorial explosion problem, which is extremely time-consuming for finding large subgraph templates. Thus, how to accelerate the proposed model for large subgraphs also needs to be solved.
In this paper, we propose a novel model, namely the Isomorphic Neural Network (ISONN), together with its variants, to address the aforementioned challenges in the graph representation learning and classification problem. ISONN is composed of two components: the graph isomorphic feature extraction component and the classification component, aiming at learning isomorphic features and classifying graph instances, respectively. In the graph isomorphic feature extraction component, ISONN automatically learns a group of subgraph templates of useful patterns from the input graph. ISONN makes use of a set of permutation matrices, which act as the node isomorphism mappings between the templates and the input graph. With the potential isomorphic features learned by all the permutation matrices and the templates, ISONN adopts one min-pooling layer to find the best node permutation for each template and one softmax layer to normalize and fuse all subgraph features learned by different kernels. Such features learned by different kernels are fused together and fed as the input to the classification component. ISONN further adopts three fully-connected layers as the classification component to project the graph instances to their labels. Moreover, to accelerate the proposed model when dealing with large subgraphs, we also propose two variants of ISONN to guarantee efficiency." }, { "heading": "2 RELATED WORK", "text": "Our work relates to subgraph mining, graph neural networks, network embedding as well as graph classification. We discuss them briefly in the following.
Subgraph Mining and Graph Kernel Methods: Mining subgraph features from graph data has been studied for many years. The aim is to extract useful subgraph features from a set of graphs by adopting some specific criteria. One classic unsupervised method (i.e., without label information) is gSpan (Yan & Han, 2002), which builds a lexicographic order among graphs and maps each graph to a unique minimum DFS code as its canonical label. GRAMI (Elseidy et al., 2014) only stores templates of frequent subgraphs and treats the frequency evaluation as a constraint satisfaction problem to find the minimal set. As a supervised model (i.e., with label information), CORK utilizes labels to guide the feature selection, where the features are generated by gSpan (Thoma et al., 2009). 
Due to the mature development of the subgraph mining field, subgraph mining methods have also been adopted in the life sciences (Mrzic et al., 2018). Moreover, several parallel-computing-based methods (Qiao et al., 2018; Hill et al., 2012; Lin et al., 2014) have been proposed to reduce the time cost. On the other hand, graph kernel methods have also been applied to discover subgraph structures (Kashima et al., 2003; Vishwanathan et al., 2010; Gaüzere et al., 2012; Shervashidze et al., 2011). Among them, most existing works focus on graphs with node labels, and the kernel methods only compute the similarity between pairwise graphs. In contrast, in this paper we handle graphs without node labels; moreover, we can not only compute the similarity between pairwise graphs but also learn subgraph templates, which can be further analyzed.
Graph Neural Network and Network Embedding: Graph neural networks (Monti et al., 2017; Atwood & Towsley, 2016; Masci et al., 2015; Kipf & Welling, 2016; Battaglia et al., 2018) have been studied in recent years because of the prosperity of deep learning. Traditional deep models cannot be directly applied to graphs due to the special data structure. The general graph neural model MoNet (Monti et al., 2017) employs CNN architectures on non-Euclidean domains such as graphs and manifolds. The GCN proposed in (Kipf & Welling, 2016) utilizes the normalized adjacency matrix to learn node features for node classification; (Bai et al., 2018) proposes a multiscale convolutional model for pairwise graph similarity with a set-matching-based graph similarity computation. However, these existing works based on graph neural networks all fail to investigate the node-orderless property of graph data and to maintain the explicit structural information. Another important topic related to this paper is network embedding (Bordes et al., 2013; Lin et al., 2015; Lai et al., 2017; Abu-El-Haija et al., 2018; Hamilton et al., 2017), which aims at learning the feature representation of each individual node in a network based on either the network structure or attribute information. Distinct from these network embedding works, the graph representation learning problem studied in this paper treats each graph as an individual instance and focuses on learning the representation of the whole graph instead.
Graph Classification: Graph classification is an important problem with many practical applications. Data like social networks, chemical compounds and brain networks can naturally be represented as graphs, with applications such as community detection (Zhang et al., 2018), anti-cancer activity identification (Kong et al., 2013; Kong & Yu, 2010) and the diagnosis of Alzheimer's patients (Tong et al., 2017; 2015), respectively. Traditionally, researchers mine subgraphs by DFS or BFS (Saigo et al., 2009; Kong et al., 2013) and use them as features. With the rapid development of deep learning (DL), many works are based on DL methods. GAM builds the model with an RNN and a self-attention mechanism (Lee et al., 2018). DCNN extends CNNs to general graph-structured data by introducing a 'diffusion-convolution' operation (Atwood & Towsley, 2016)." }, { "heading": "3 TERMINOLOGY AND PROBLEM DEFINITION", "text": "In this section, we define the notations and the terminology used in this paper and give the formulation of the graph classification problem." 
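Before formalizing the problem, a short numerical illustration of the node-orderless issue discussed above may be helpful: the same graph yields different adjacency matrices under different node orders, while graph-level invariants such as the spectrum agree. The toy graph below is our own example, not one from the paper.

```python
import numpy as np

# A 4-node path graph 0-1-2-3, written down under two different node orders.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

perm = [2, 0, 3, 1]                 # an arbitrary relabeling of the nodes
P = np.eye(4)[perm]                 # the corresponding permutation matrix
A_perm = P @ A @ P.T                # same graph, different node order

print(np.array_equal(A, A_perm))    # False: the matrices differ...
print(np.linalg.eigvalsh(A) - np.linalg.eigvalsh(A_perm))  # ...but the spectra agree
```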
}, { "heading": "3.1 NOTATIONS", "text": "In the following sections, we will use lower case letters like x to denote scalars, lower case bold letters (e.g. x) to represent vectors, bold-face capital letters (e.g. X) to show the matrices. For tensors or sets, capital calligraphic letters are used to denote them. We use xi to represent the i-th element in x. Given a matrix X, we use X(i, j) to express the element in i-th row and j-th column. For i-th row vector and j-th column vector, we use X(i, :) and X(:, j) to denote respectively. Moreover, notations x> and X> denote the transpose of vector x and matrix X respectively. Besides, the F -norm of matrix X can be represented as ‖X‖F = ( ∑ i,j |Xi,j |2) 1 2 ." }, { "heading": "3.2 PROBLEM FORMULATION", "text": "Many real-world inter-connected data can be formally represented as the graph-structured data.\nDEFINITION 1 (Graph): Formally, a graph can be represented as G = (V, E), where the sets V and E denote the nodes and links involved in the graph, respectively. Some representative examples include the human brain graphs (where the nodes denote brain regions and links represent the correlations among these regions), biological molecule graphs (with the nodes represent the atoms and links denote the atomic bonds), as well as the geographical graphs in the offline world (where the nodes denote the communities and the links represent the commute\nroutes among communities). Meanwhile, many concrete real-world application problems, e.g., brain graph based patient disease diagnosis, molecule function classification and community vibrancy prediction can also be formulated as the graph classification problems.\nProblem Definition: Formally, given a graph set G = {G1, G2, · · · , Gn} with a small number of labeled graph instances, the graph classification problem aims at learning a mapping, i.e., f : G → Y , to project each graph instance into a pre-defined label space Y = {+1,−1}. In this paper, we will take the graph binary classification as an example to illustrate the problem setting for ISONN. A simple extension of the model can be applied to handle more complicated learning scenarios with multi-class or multi-label as well." }, { "heading": "4 PROPOSED METHOD", "text": "The overall architecture of ISONN is shown in Figure 1. The ISONN framework includes two main components: graph isomorphic feature extraction component and classification component. The graph isomorphic feature extraction component includes a graph isomorphic layer, a minpooling layer as well as a softmax layer and the classification component is composed by three fully-connected layers. They will be discussed in detail in the following subsections." }, { "heading": "4.1 GRAPH ISOMORPHIC FEATURE EXTRACTION COMPONENT", "text": "Graph isomorphic feature extraction component targets at learning the graph features. To achieve that objective, ISONN adopts an automatic feature extraction strategy for graph representation learning. In ISONN, one graph isomorphic feature extraction component involves three layers: the graph isomorphic layer, the min-pooling layer and the softmax layer. In addition, we can further construct a deep graph isomorphic neural network by applying multiple isomorphic feature extraction components on top of each other, i.e., apply the combination of ”graph isomorphic layer, min pooling layer, softmax layer” several times. 
The second and later components are applied to every feature matrix learned by the combination of channels of all former components." }, { "heading": "4.1.1 GRAPH ISOMORPHIC LAYER", "text": "The graph isomorphic layer is the first effective layer in deep learning that handles the node-order restriction in graph representations. Assume we have a graph $G = (\mathcal{V}, \mathcal{E})$ with adjacency matrix $\mathbf{A} \in \mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}$. In order to find the existence of specific subgraph patterns in the input graph, ISONN matches the input graph with a set of subgraph templates. Each template is denoted as a kernel variable $\mathbf{K}_i \in \mathbb{R}^{k\times k}, \forall i \in \{1, 2, \cdots, c\}$. Here, $k$ denotes the node number in the subgraphs and $c$ is the channel number (i.e., the total template count). Meanwhile, to match one template with regions in the input graph (i.e., sub-matrices in $\mathbf{A}$), we use a set of permutation matrices, which map both rows and columns of the kernel variable to the subgraphs effectively. A permutation matrix can be represented as $\mathbf{P} \in \{0,1\}^{k\times k}$, sharing the same dimensions as the kernel variable. Therefore, given a kernel $\mathbf{K}_i$ and a sub-matrix $\mathbf{M}_{(s,t)} \in \mathbb{R}^{k\times k}$ in $\mathbf{A}$ (i.e., a region in the input graph $G$, where $s, t \in \{1, 2, \cdots, (|\mathcal{V}|-k+1)\}$ denote a starting index pair in $\mathbf{A}$), there exist $k!$ such permutation matrices. The optimal one is the matrix $\mathbf{P}^*$ that minimizes the following term:
$$\mathbf{P}^* = \arg\min_{\mathbf{P}\in\mathcal{P}} \left\|\mathbf{P}\mathbf{K}_i\mathbf{P}^\top - \mathbf{M}_{(s,t)}\right\|_F^2, \qquad (1)$$
where $\mathcal{P} = \{\mathbf{P}_1, \mathbf{P}_2, \cdots, \mathbf{P}_{k!}\}$ covers all the potential permutation matrices. Formally, the isomorphic feature extracted based on the kernel $\mathbf{K}_i$ for the regional sub-matrix $\mathbf{M}_{(s,t)}$ in $\mathbf{A}$ can be represented as
$$z_{i,(s,t)} = \left\|\mathbf{P}^*\mathbf{K}_i(\mathbf{P}^*)^\top - \mathbf{M}_{(s,t)}\right\|_F^2 = \min\left\{\left\|\mathbf{P}\mathbf{K}_i\mathbf{P}^\top - \mathbf{M}_{(s,t)}\right\|_F^2\right\}_{\mathbf{P}\in\mathcal{P}} = \min\big(\bar{\mathbf{z}}_{i,(s,t)}(1:k!)\big), \qquad (2)$$
where the vector $\bar{\mathbf{z}}_{i,(s,t)} \in \mathbb{R}^{k!}$ has entries $\bar{\mathbf{z}}_{i,(s,t)}(j) = \|\mathbf{P}_j\mathbf{K}_i\mathbf{P}_j^\top - \mathbf{M}_{(s,t)}\|_F^2, \forall j \in \{1, 2, \cdots, k!\}$, denoting the isomorphic features computed by the $j$-th permutation matrix $\mathbf{P}_j \in \mathcal{P}$. As indicated by Figure 1, ISONN computes the final isomorphic features for the kernel variable $\mathbf{K}_i$ in two steps: (1) computing all the potential isomorphic features via different permutation matrices with the graph isomorphic layer, and (2) identifying and fusing the optimal features with the min-pooling layer and the softmax layer, introduced as follows. By shifting the kernel matrix $\mathbf{K}_i$ over the regional sub-matrices, ISONN extracts the isomorphic features on matrix $\mathbf{A}$, which can be denoted as a 3-way tensor $\bar{\mathcal{Z}}_i \in \mathbb{R}^{k!\times(|\mathcal{V}|-k+1)\times(|\mathcal{V}|-k+1)}$, where $\bar{\mathcal{Z}}_i(1:k!, s, t) = \bar{\mathbf{z}}_{i,(s,t)}(1:k!)$. In a similar way, we can also compute the isomorphic feature tensors based on the other kernels, denoted as $\bar{\mathcal{Z}}_1, \bar{\mathcal{Z}}_2, \cdots, \bar{\mathcal{Z}}_c$, respectively." }, { "heading": "4.1.2 MIN-POOLING LAYER", "text": "Given the tensor $\bar{\mathcal{Z}}_i$ computed by $\mathbf{K}_i$ in the graph isomorphic layer, ISONN identifies the optimal permutation matrices via the min-pooling layer. Formally, we can represent the result of the optimal permutation selection on $\bar{\mathcal{Z}}_i$ as a matrix $\mathbf{Z}_i$:
$$\mathbf{Z}_i(s,t) = \min\{\bar{\mathcal{Z}}_i(1:k!, s, t)\}. \qquad (3)$$
The min-pooling layer learns the optimal matrix $\mathbf{Z}_i$ for kernel $\mathbf{K}_i$ along the first dimension (i.e., the dimension indexed by the different permutation matrices), which effectively identifies the isomorphic features created by the optimal permutation matrices. For the remaining kernel matrices, we can also obtain their corresponding graph isomorphic feature matrices, denoted as $\mathbf{Z}_1, \mathbf{Z}_2, \cdots, \mathbf{Z}_c$, respectively."
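A minimal NumPy sketch of Equations (1)-(3), by brute-force enumeration of the $k!$ permutations, is given below. It treats the kernel $\mathbf{K}_i$ as a fixed array rather than a trainable variable, and all names are ours, not the authors' implementation.

```python
import numpy as np
from itertools import permutations

def isomorphic_features(A, K):
    """Compute Z_i of Eq. (3): for every k x k region M_(s,t) of the adjacency
    matrix A, the minimum of ||P K P^T - M_(s,t)||_F^2 over all permutations P."""
    n, k = A.shape[0], K.shape[0]
    perms = [np.eye(k)[list(p)] for p in permutations(range(k))]  # all k! permutation matrices
    Z = np.empty((n - k + 1, n - k + 1))
    for s in range(n - k + 1):
        for t in range(n - k + 1):
            M = A[s:s + k, t:t + k]
            # Graph isomorphic layer (Eq. 2) followed by min-pooling (Eq. 3)
            Z[s, t] = min(np.sum((P @ K @ P.T - M) ** 2) for P in perms)
    return Z
```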
}, { "heading": "4.1.3 SOFTMAX LAYER", "text": "Based on the above descriptions, a perfect matching between the subgraph templates with the input graph will lead to a very small isomorphic feature, e.g., a value approaching to 0. If we feed the small features into the classification component, the useful information will vanish and the relative useless information (i.e., features learned by the subgraphs dismatch the kernels) dominates the learning feature vector in the end. Meanwhile, the feature values computed in Equation (3) can also be in different scales for different kernels. To effectively normalize these features, we propose to apply the softmax function to matrices Z1, Z2, · · · , Zc across all c kernels. Compared with the raw features, e.g., Zi, softmax as a non-linear mapping can also effectively highlight the useful features in Zi by rescaling them to relatively larger values especially compared with the useless ones. Formally, we can represent the fused graph isomorphic features after rescaling by all the kernels as a 3-way tensor Q, where slices along first dimension can be denoted as:\nQ(i, :, :) = Ẑi , where Ẑi = softmax(−Zi), ∀i ∈ {1, . . . , c}. (4)" }, { "heading": "4.2 CLASSIFICATION COMPONENT", "text": "After the isomorphic feature tensor Q is obtained, we feed it into a classification component. Let q denote the flattened vector representation of feature tensorQ, and we pass it to three fully-connected layers to get the predicted label vector ŷ. For the graph binary classification, suppose we have the ground truth y = (yg1 , y g 2) and the predicted label vector ŷ g = (ŷg1 , ŷ g 2) for the sample g from the training batch set B. We use cross-entropy as the loss function in ISONN. Formally, the fullyconnected (FC) layers and the objective function can be represented as follows respectively:\nFC Layers: { d1 = σ(W1q + b1), d2 = σ(W2d1 + b2), ŷ = σ(W3d2 + b3), Objective Function: L = − ∑ g∈B 2∑ j=1 ygj log ŷ g j , (5)\nwhere Wi and bi represent the weights and biases in i-th layer respectively for i ∈ {1, 2, 3}. The σ denotes the adopted the relu activation function. To train the proposed model, we adopt the back propagation algorithm to learn both the subgraph templates and the other involved variables." }, { "heading": "4.3 MORE DISCUSSIONS ON GRAPH ISOMORPHIC FEATURE EXTRACTION IN ISONN", "text": "Before introducing the empirical experiments to test the effectiveness of ISONN, we would like to provide more discussions about the computation time complexity of the graph isomorphic feature extraction component involved in ISONN. Formally, given the input graph G with n = |V| nodes, by shifting the kernel variables (of size k × k) among the dimensions of the corresponding graph adjacency matrix, we will be able to obtain (n−k+1)2 regional sub-matrices (orO(n2) regional submatrices for notation simplicity). Here, we assume ISONN has only one isomorphic layer involving c different kernels. In the forward propagation, the introduced time cost in computing the graph isomorphic features can be denoted as O(ck!k3n2), where term k! is introduced in enumerating all the potential permutation matrices and k3 corresponds to the matrix multiplication time cost.\nAccording to the notation, we observe that n is fixed for the input graph. Once the kernel channel number c is decided, the time cost notation will be mainly dominated by k. 
To lower the above time complexity, in this part we propose to further improve ISONN from two perspectives: (1) computing the optimal permutation matrix in a faster manner, and (2) using deeper model architectures with small-sized kernels." }, { "heading": "4.3.1 FAST PERMUTATION MATRIX COMPUTATION", "text": "Instead of enumerating all the permutation matrices in the graph isomorphic feature extraction as indicated by Equations (2)-(3), here we introduce a fast way to compute the optimal permutation matrix for the provided kernel variable matrix, e.g., $\mathbf{K}_i$, and input regional sub-matrix $\mathbf{M}_{(s,t)}$, directly according to the following theorem.
THEOREM 1 Formally, let the kernel variable $\mathbf{K}_i$ and the input regional sub-matrix $\mathbf{M}_{(s,t)}$ be $k \times k$ real symmetric matrices with $k$ distinct eigenvalues $\alpha_1 > \alpha_2 > \cdots > \alpha_k$ and $\beta_1 > \beta_2 > \cdots > \beta_k$, respectively, and let their eigendecompositions be represented by
$$\mathbf{K}_i = \mathbf{U}_{\mathbf{K}_i}\boldsymbol{\Lambda}_{\mathbf{K}_i}\mathbf{U}_{\mathbf{K}_i}^\top \quad\text{and}\quad \mathbf{M}_{(s,t)} = \mathbf{U}_{\mathbf{M}_{(s,t)}}\boldsymbol{\Lambda}_{\mathbf{M}_{(s,t)}}\mathbf{U}_{\mathbf{M}_{(s,t)}}^\top, \qquad (6)$$
where $\mathbf{U}_{\mathbf{K}_i}$ and $\mathbf{U}_{\mathbf{M}_{(s,t)}}$ are orthogonal matrices of eigenvectors and $\boldsymbol{\Lambda}_{\mathbf{K}_i} = \mathrm{diag}(\alpha_j)$, $\boldsymbol{\Lambda}_{\mathbf{M}_{(s,t)}} = \mathrm{diag}(\beta_j)$. The minimum of $\|\mathbf{P}\mathbf{K}_i\mathbf{P}^\top - \mathbf{M}_{(s,t)}\|^2$ is attained for the following $\mathbf{P}$'s:
$$\mathbf{P}^* = \mathbf{U}_{\mathbf{M}_{(s,t)}}\mathbf{S}\mathbf{U}_{\mathbf{K}_i}^\top, \qquad (7)$$
where $\mathbf{S} \in \mathcal{S} = \{\mathrm{diag}(s_1, s_2, \cdots, s_k)\,|\,s_i = 1 \text{ or } -1\}$.
The proof of the theorem is provided in the appendix. Computing the optimal permutation matrix $\mathbf{P}^*$ via (7) requires trials over different $\mathbf{S}$. To avoid this cost, we instead take the entrywise upper bound of $\mathbf{U}_{\mathbf{M}_{(s,t)}}\mathbf{S}\mathbf{U}_{\mathbf{K}_i}^\top$ as the approximate optimal permutation matrix, which together with the corresponding optimal feature $z_{i,(s,t)}$ can be denoted as
$$\mathbf{P}^* = |\mathbf{U}_{\mathbf{M}_{(s,t)}}||\mathbf{U}_{\mathbf{K}_i}|^\top \quad\text{and}\quad z_{i,(s,t)} = \|\mathbf{P}^*\mathbf{K}_i(\mathbf{P}^*)^\top - \mathbf{M}_{(s,t)}\|^2, \qquad (8)$$
where $|\cdot|$ denotes the entrywise absolute value and $|\mathbf{U}_{\mathbf{M}_{(s,t)}}||\mathbf{U}_{\mathbf{K}_i}|^\top \ge \mathbf{U}_{\mathbf{M}_{(s,t)}}\mathbf{S}\mathbf{U}_{\mathbf{K}_i}^\top$ holds entrywise for all $\mathbf{S} \in \mathcal{S}$. By replacing Equations (2)-(3) with Equation (8), we can compute the optimal graph isomorphic feature for the kernel $\mathbf{K}_i$ and input regional sub-matrix $\mathbf{M}_{(s,t)}$ at a much lower time cost. Furthermore, since the eigendecomposition time complexity of a $k \times k$ matrix is $O(k^3)$, based on the above theorem, we are able to lower the total time cost of graph isomorphic feature extraction to $O(c k^3 n^2)$, which can be reduced further with the method introduced in the following subsection." }, { "heading": "4.3.2 DEEP GRAPH ISOMORPHIC FEATURE EXTRACTION", "text": "Since the graph isomorphic layer is the main functional layer, we simply write multi-layer for short to denote multiple graph isomorphic feature extraction components (i.e., a deep model). We also provide an example of a deep model in the appendix. Here, we illustrate the advantages of a deep ISONN model with small-sized kernels compared against a shallow ISONN model with large kernels. In Figure 2, we provide an example of two ISONN models with different architectures:
• the left model has one single layer and 6 kernels, where the kernel size is k = 12;
• the right model has two layers: layer 1 involves 2 kernels of size 3, and layer 2 involves 3 kernels of size 4.
Comparing these two models, we observe that they have identical representation learning capacity. However, the time cost of feature extraction introduced by the left model is much higher than that of the right model: $O(6 \cdot 12^3 \cdot n^2)$ versus $O(2 \cdot 3^3 \cdot n^2 + 3 \cdot 4^3 \cdot n^2)$, respectively.
Therefore, for the ISONN model, we tend to use small-sized kernels. 
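A sketch of the eigendecomposition-based shortcut of Equation (8) is given below (NumPy, with symmetric inputs assumed as in Theorem 1); note that the resulting $\mathbf{P}^*$ is in general only an approximation of a true permutation matrix, and the function name is ours.

```python
import numpy as np

def fast_isomorphic_feature(K, M):
    """Approximate feature of Eq. (8): eigendecompose both symmetric k x k
    matrices, form P* = |U_M| |U_K|^T, and evaluate the squared distance."""
    _, U_K = np.linalg.eigh(K)          # orthonormal eigenvectors of the kernel
    _, U_M = np.linalg.eigh(M)          # orthonormal eigenvectors of the region
    P_star = np.abs(U_M) @ np.abs(U_K).T
    return np.sum((P_star @ K @ P_star.T - M) ** 2)
```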
Formally, according to the fast method provided in the previous part, given a 1-layer ISONN model with $c$ large kernels of size $k$, its graph isomorphic feature extraction time complexity can be denoted as $O(c k^3 n^2)$. Inspired by Figure 2, without affecting the representation capacity, such a model can be replaced by a $\max\{\lceil \log_2 c \rceil, \lceil \log_3 k \rceil\}$-layer deep ISONN model instead, where each layer involves 2 kernels of size 3. The graph isomorphic feature extraction time complexity of the deep model is then $O\big(\max\{\lceil \log_2 c \rceil, \lceil \log_3 k \rceil\} \cdot 2 \cdot 3^3 \cdot n^2\big)$ (or $O\big(\max\{\lceil \log c \rceil, \lceil \log k \rceil\} \cdot n^2\big)$ for simplicity)." }, { "heading": "5 EXPERIMENTS", "text": "To evaluate the performance of ISONN, in this section we describe the experimental settings as well as five benchmark datasets. Finally, we discuss the experimental results with parameter analyses on kernel size, channel number and time complexity." }, { "heading": "5.1 EXPERIMENTAL SETTINGS", "text": "In this subsection, we use five real-world benchmark datasets: HIV-fMRI (Cao et al., 2015a), HIV-DTI (Cao et al., 2015a), BP-fMRI (Cao et al., 2015b), MUTAG1 and PTC1. Both HIV-fMRI and HIV-DTI have 56 positive instances and 21 negative instances, and graph instances in both are represented as 90 × 90 matrices (Cao et al., 2015a). BP-fMRI has 52 positive and 45 negative instances, and each instance is represented by an 82 × 82 matrix (Cao et al., 2015b). MUTAG and PTC are two datasets which have been widely used in academia (Xu et al., 2018; Shervashidze et al., 2011). MUTAG has 125 positive and 63 negative graph instances with graph size 28 × 28. PTC is a relatively large dataset, with 152 positive and 192 negative graph instances of graph size 109 × 109. With these datasets, we first introduce the comparison methods used in this paper and then describe the experimental setup and the adopted evaluation metrics in detail." }, { "heading": "5.1.1 COMPARISON METHODS", "text": "• ISONN & ISONN-fast: The proposed method ISONN uses a set of template variables as well as the permutation matrices to extract the isomorphic features, and feeds these features to the classification component. The variant model named ISONN-fast uses Equation (8) to compute the optimal permutation matrices; the other settings remain unchanged.
• Freq: This method uses the top-k frequent subgraphs as its features. It is an unsupervised feature selection method based on frequency.
• AE: We use the autoencoder model (AE) (Vincent et al., 2010) to obtain the features of graphs without label information. It is an unsupervised learning method, which learns latent representations of the connections in the graphs without considering the structural information.
• CNN: The convolutional model (Krizhevsky et al., 2012), which learns the structural information within small regions of the whole graph. We adopt one convolutional layer and three fully-connected layers as the classification module.
1https://ls11-www.cs.tu-dortmund.de/people/morris/graphkerneldatasets/" }, { "heading": "5.1.2 EXPERIMENTAL SETUP AND EVALUATION METRICS", "text": "In our experiments, to make the results more reliable, we partition the datasets into 3 folds with a train/test ratio of 2 : 1, where two folds are treated as the training data and the remaining one as the testing data. We select the top-100 features for Freq, as stated in (Wang et al., 2017), with a three-layer MLP classifier whose layer sizes are 1024, 128 and 2. 
For the autoencoder, we apply a two-layer encoder and a two-layer decoder. For the CNN, we apply one convolutional layer of size 5 × 5 × 50, a max-pooling layer with kernel size 2 × 2, and one gating ReLU layer as the activation layer, and we set the parameters in the classification module the same as in the Freq classifier. For SDBN, we set the architecture as follows: we use two layers of “convolution layer + max-pooling layer + activation layer” and append a fully-connected layer with 100 neurons as well as an activation layer, where the parameters are the same as those in CNN. We also set the dropout rate in SDBN to 0.5 to avoid overfitting. For the WL kernel, if the average similarity score for a test graph is greater than 0.5, we assign the test graph a positive label; otherwise, we assign a negative label. We follow the settings in (Kipf & Welling, 2016) and (Xu et al., 2018) for GCN and GIN-0. Here, to make a fair comparison, we use the adjacency matrices as features (i.e., no node label information) for WL, GCN and GIN. In the experiments, we set the kernel size k in the isomorphic layer for the five datasets as 2, 4, 3, 4, 4, respectively, and set the parameters in the classification component the same as in the Freq classifier. We adopt the Adam optimizer, set the learning rate to η = 0.001, and report the average results on balanced datasets." }, { "heading": "5.2 EXPERIMENTAL RESULTS", "text": "In this section, we investigate the effectiveness of the learned subgraph-based graph feature representations. We adopt one isomorphic layer with kernel size k = 2 and channel number c = 3 for HIV-fMRI, and one isomorphic layer with (k = 4, c = 2), (k = 3, c = 1), (k = 4, c = 1) and (k = 4, c = 2) for HIV-DTI, BP-fMRI, MUTAG and PTC, respectively. The results are shown in Table 1. From the table, we can observe that ISONN outperforms all other baseline methods on all these datasets. We remark that ISONN and ISONN-fast are very close on MUTAG, and that ISONN has the best overall performance on PTC. Compared with Freq, the proposed method achieves better performance without searching over all possible subgraphs manually. AE has almost the worst performance among all comparison methods. This is because the features learned from AE do not contain any structural information. For HIV-DTI, AE gets 0 in F1. This is because the dataset contains too many zeros, which makes the AE learn trivial features. Also, for PTC, its F1 is higher than that of other models, but its accuracy only reaches 50.0, which indicates that AE actually performs poorly, since it cannot discriminate the classes of the instances (i.e., it predicts all positive classes). CNN performs better than AE but still worse than ISONN. The reason can be that it learns some structural information but fails to learn representative structural patterns. SDBN is designed for brain images, so it may not work for MUTAG and PTC. One possible reason for WL's poor results is that the isomorphism test is done on the whole graph, which may lead to erratic results. GCN performs better than GIN but worse than ISONN, showing that GCN can learn some structural information without node labels, whereas GIN cannot work with the adjacency matrix as input. ISONN-fast achieves the best scores on MUTAG and the second-best on HIV-fMRI, yet is worse than several other methods on the other datasets. This may be because the approximation of P impairs the performance. Comparing ISONN with AE, ISONN achieves better results. 
This means that the structural information is more important than connectivity information alone for the classification problem. Compared with CNN, the results also show the contribution of breaking the node order in learning the subgraph templates. Similar to SDBN, ISONN also finds the features from subgraphs, but ISONN gets better performance with a more concise architecture. Contrasting with GCN and GIN, ISONN can maintain the explicit subgraph structures in the graph representations, while GCN and GIN simply use the aggregation of the neighboring node features, losing the graph-level substructure information." }, { "heading": "5.3 PARAMETER ANALYSIS", "text": "To further study the proposed method, we discuss the effects of different kernel sizes and channel numbers in ISONN. The model convergence analysis is provided in the appendix.

• Kernel Size: We show the effectiveness of different k in Figure 3. Based on the previous statement, the parameter k can affect the final results since it controls the size of the learned subgraph templates. To investigate the best kernel size for each dataset, we fix the channel number c = 1. As Figure 3 shows, different datasets have different appropriate kernel sizes. The best kernel sizes are 2, 4, 3, 4, 4 for the five datasets, respectively.

• Channel Number: We also study the effectiveness of multiple channels (i.e., multiple templates in one layer). To discuss how the channel number influences the results, we choose the best kernel size for each dataset (i.e., 2, 4, 3, 4, 4, respectively). From all subfigures in Figure 4, we can see the differences among different channel numbers when using only one isomorphic layer. As shown in Figure 4, ISONN achieves the best results with c = 3, 2, 1, 1, 2, respectively, which means that increasing the channel number can improve the performance, but more channels do not necessarily lead to better results. The reason could be that the more templates we use, the more complex our model becomes. With such a complex model, it is easy to overfit the training data, especially when the dataset is quite small. Thus, increasing the channel number can improve the performance, but the effectiveness still depends on the quality and the quantity of the dataset." }, { "heading": "5.4 TIME COMPLEXITY STUDY", "text": "To study the efficiency of ISONN and ISONN-fast, we collect the actual running time of training the model, which is shown in Figure 5. In both Figures 5(a) and 5(b)2, the x-axis denotes the value of k or c and the y-axis denotes the time cost under the different parameters. In Figure 5(a), the four lines show the same pattern: when k increases, the time cost explodes. This pattern can be directly explained by the size of the permutation matrix set: there are k! permutation matrices of size k, so increasing the kernel size by one multiplies their number by a factor of k + 1. When changing c, shown in Figure 5(b), it is easy to observe that those curves are basically linear with different slopes. This is also natural, since whenever we add one channel, we only need to add a constant number of permutation matrices. To study the efficiency of ISONN-fast, Figure 5(c) shows the running times of ISONN and ISONN-fast on MUTAG. As it shows, ISONN-fast uses less time when the kernel size is greater than 4; otherwise, ISONN and ISONN-fast show little difference, since the eigendecomposition has nearly the same time complexity as calculating all possible node permutations.
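To make the factorial blow-up above concrete, the short Python sketch below enumerates the k × k permutation matrices that the isomorphic layer iterates over; the set size k! directly explains the steep curves in Figure 5(a). This is an illustration only, not the training code:

```python
import itertools
import numpy as np

def permutation_matrices(k):
    """Return all k! permutation matrices of size k x k."""
    mats = []
    for perm in itertools.permutations(range(k)):
        P = np.zeros((k, k))
        P[np.arange(k), perm] = 1.0
        mats.append(P)
    return mats

# Growing k by one multiplies the set size by k + 1: 2, 6, 24, 120, 720, ...
for k in range(2, 7):
    print(k, len(permutation_matrices(k)))
```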
The results also verify the theoretical time complexity analysis in Section 4.3." }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed a novel graph neural network named ISONN to solve the graph classification problem. ISONN consists of two components: (1) the isomorphic component, where a set of permutation matrices is used to break the arbitrary node order imposed by the matrix representation for a bunch of templates, and one min-pooling layer and one softmax layer are used to get the best isomorphic features, and (2) the classification component, which contains three fully-connected layers. We further discussed the two efficient variants of ISONN to accelerate the model. Next, we performed experiments on five real-world datasets. The experimental results show that the proposed method outperforms all comparison methods, which demonstrates its superiority. The experimental analysis of the time complexity illustrates the efficiency of ISONN-fast.

2Since PTC is a relatively large dataset compared with the others, its running time is on a different scale, which would make the time growth curves of the other datasets hard to see. Thus, we do not show the results on PTC." }, { "heading": "7 APPENDIX", "text": "" }, { "heading": "7.1 PROOF OF THEOREM 1 AND DISCUSSION ABOUT EQUATION (8)", "text": "Before giving the proof of Theorem 1, we need to introduce Lemma 1 first.

LEMMA 1 If A and B are Hermitian matrices with eigenvalues α1 ≥ α2 ≥ · · · ≥ αn and β1 ≥ β2 ≥ · · · ≥ βn respectively, then ‖A − B‖² ≥ Σ_{i=1}^{n} (αi − βi)².

Based on Lemma 1, we can derive the proof of Theorem 1 as follows.

PROOF 1 From Lemma 1, Equation 9 holds for any orthogonal matrix R, since the eigenvalues of R Ki R^⊤ are the same as those of Ki:

‖R Ki R^⊤ − M(s,t)‖² ≥ Σ_{j=1}^{n} (αj − βj)². (9)

On the other hand, if we use the P in Equation 7, we have

‖P Ki P^⊤ − M(s,t)‖² = ‖U_{M(s,t)} S U_{Ki}^⊤ U_{Ki} Λ_{Ki} U_{Ki}^⊤ U_{Ki} S U_{M(s,t)}^⊤ − U_{M(s,t)} Λ_{M(s,t)} U_{M(s,t)}^⊤‖²
= ‖U_{M(s,t)} (S Λ_{Ki} S − Λ_{M(s,t)}) U_{M(s,t)}^⊤‖²
= ‖S Λ_{Ki} S − Λ_{M(s,t)}‖²
= ‖Λ_{Ki} − Λ_{M(s,t)}‖²
= Σ_{j=1}^{n} (αj − βj)², (10)

where we use the identities ‖UX‖ = ‖XU^⊤‖ = ‖X‖ for any orthogonal matrix U, and S Λ_{Ki} S = S² Λ_{Ki} = Λ_{Ki}, since S and Λ_{Ki} are both diagonal matrices and S² = I.

Moreover, it is clear that

tr(P^⊤ U_{M(s,t)} S U_{Ki}^⊤) ≤ tr(P^⊤ |U_{M(s,t)}| |U_{Ki}^⊤|), (11)

because the elements of S are either −1 or +1. Also, since each row vector of U_{M(s,t)} and U_{Ki} is a unit vector, we have

tr(P^⊤ |U_{M(s,t)}| |U_{Ki}^⊤|) ≤ n. (12)

If there exists a perfect permutation matrix P∗, then there exists an S∗ such that

tr(P∗^⊤ U_{M(s,t)} S∗ U_{Ki}^⊤) = tr(P∗^⊤ P∗) = n. (13)

Thus, based on Equations (11), (12) and (13), we get

tr(P∗^⊤ |U_{M(s,t)}| |U_{Ki}^⊤|) = n. (14)

This means that P∗ maximizes tr(P^⊤ |U_{M(s,t)}| |U_{Ki}^⊤|), since tr(P^⊤ |U_{M(s,t)}| |U_{Ki}^⊤|) ≤ n for any permutation matrix P. Therefore, when Ki and M(s,t) are isomorphic, the optimal permutation matrix can be obtained as a permutation matrix P which maximizes tr(P^⊤ |U_{M(s,t)}| |U_{Ki}^⊤|). Therefore, we take P∗ = |U_{M(s,t)}| |U_{Ki}^⊤| directly." }, { "heading": "7.2 AN EXAMPLE FOR DEEP ISOMORPHIC NEURAL NETWORK", "text": "To better illustrate the idea of our deep model, we also provide the model architecture that involves two graph isomorphic feature extraction components. Suppose the kernel size of the first graph isomorphic layer is k1 with c channels, whereas the kernel size of the second graph isomorphic layer is k2 with m channels. The model is shown in Figure 6.
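Before walking through the figure, a small numerical sketch of the fast permutation estimate from Section 7.1 may help. It assumes symmetric real inputs, so that the real eigendecomposition stands in for the Hermitian setting of Lemma 1; the example matrices are hypothetical, and this is an illustration rather than the full ISONN-fast pipeline:

```python
import numpy as np

def fast_permutation_estimate(K, M):
    """ISONN-fast estimate P* = |U_M| |U_K^T| built from two eigendecompositions.

    K: k x k kernel (template) matrix; M: k x k subgraph matrix. Both are
    assumed symmetric. Cost is O(k^3), versus iterating over all k! permutations.
    """
    _, U_K = np.linalg.eigh(K)  # K = U_K Lambda_K U_K^T (eigenvalues ascending)
    _, U_M = np.linalg.eigh(M)  # M = U_M Lambda_M U_M^T
    return np.abs(U_M) @ np.abs(U_K.T)

# Example: M is a node-permuted copy of K; the (relaxed) estimate reflects
# that alignment without enumerating permutations.
K = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
P_true = np.eye(3)[[2, 0, 1]]       # a ground-truth permutation
M = P_true @ K @ P_true.T
print(np.round(fast_permutation_estimate(K, M), 2))
```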
After the first graph isomorphic feature extraction component, we get the first feature tensor Q1, and each element in Q1 denotes the matching score between one subgraph and one kernel template. Thus, each channel of Q1 can be regarded as a new input matrix for the next component. Since we have c channels in the first component, the second component is applied to every channel of Q1. If the channel number of the second component is m, then the first dimension of the learned feature tensor Q2 of the second component is c · m. For a deeper model with 3 or more graph isomorphic feature extraction components, our model performs similar operations to those of the second isomorphic component. The first dimension of the final tensor Q will be the product of the channel numbers of all preceding graph isomorphic layers." }, { "heading": "7.3 CONVERGENCE ANALYSIS", "text": "Figure 7 shows the convergence trend of ISONN on the five datasets, where the x-axis denotes the epoch number and the y-axis the training loss. From these sub-figures, we can see that the proposed method achieves a stable solution within 50 epochs, except for MUTAG (which needs almost 130 epochs to converge), which illustrates that our method converges relatively fast." } ]
2,019
null
SP:86076eabb48ef1fe9d51b54945bf81ed44bcacd7
[ "This paper list several limitations of translational-based Knowledge Graph embedding methods, TransE which have been identified by prior works and have theoretically/empirically shown that all limitations can be addressed by altering the loss function and shifting to Complex domain. The authors propose four variants of loss function which address the limitations and propose a method, RPTransComplEx which utilizes their observations for outperforming several existing Knowledge Graph embedding methods. Overall, the proposed method is well motivated and experimental results have been found to be consistent with the theoretical analysis.", "In this paper, the authors investigate the main limitations of TransE in the light of loss function. The authors claim that their contributions consist of two parts: 1) proving that the proper selection of loss functions is vital in KGE; 2) proposing a model called TransComplEx. The results show that the proper selection of the loss function can mitigate the limitations of TransX (X=H, D, R, etc) models." ]
Knowledge graphs (KGs) represent the world's facts in structured form. KG completion exploits the existing facts in a KG to discover new ones. The translation-based embedding model (TransE) is a prominent formulation for KG completion. Despite the efficiency of TransE in memory and time, it is claimed that TransE suffers from several limitations in encoding relation patterns (such as symmetric and reflexive relations). To address these, most attempts have circled around the revision of the TransE score function, resulting in more complicated score functions (such as TransA/D/G/H/R, etc.). These attempts have totally disregarded the effect of loss functions in this regard. We show that loss functions are key factors here and that disregarding them leads to conclusions which are inaccurate or even wrong. More concretely, we show that the claimed limitations are inaccurate, as the effect of the loss was ignored. In this regard, we pose theoretical investigations of the main limitations of TransE in the light of the loss function. To the best of our knowledge, so far, this has not been comprehensively investigated. We show that by a proper selection of the loss function for training the TransE model, the main limitations are mitigated. This is achieved by setting an upper-bound for the scores of positive samples, defining the region of truth (i.e., the region where a triple is considered positive by the model). Our theoretical proofs, together with experimental results, fill the gap in the understanding of the limitations of the translation-based class of embedding models and confirm the importance of the selection of loss functions for training the models and for their performance.
[]
[ { "authors": [ "Farahnaz Akrami", "Lingbing Guo", "Wei Hu", "Chengkai Li" ], "title": "Re-evaluating embedding-based knowledge graph completion methods", "venue": "In Proceedings of the 27th ACM International Conference on Information and Knowledge Management,", "year": 2018 }, { "authors": [ "Kurt Bollacker", "Colin Evans", "Praveen Paritosh", "Tim Sturge", "Jamie Taylor" ], "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "venue": "In Proceedings of the 2008 ACM SIGMOD international conference on Management of data,", "year": 2008 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto Garcia-Duran", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating embeddings for modeling multi-relational data", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Tim Dettmers", "Pasquale Minervini", "Pontus Stenetorp", "Sebastian Riedel" ], "title": "Convolutional 2d knowledge graph embeddings", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Boyang Ding", "Quan Wang", "Bin Wang", "Li Guo" ], "title": "Improving knowledge graph embedding using simple constraints", "venue": "arXiv preprint arXiv:1805.02408,", "year": 2018 }, { "authors": [ "Takuma Ebisu", "Ryutaro Ichise" ], "title": "Toruse: Knowledge graph embedding on a lie group", "venue": "In ThirtySecond AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Takuma Ebisu", "Ryutaro Ichise" ], "title": "Generalized translation-based embedding of knowledge graph", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2019 }, { "authors": [ "Bahare Fatemi", "Siamak Ravanbakhsh", "David Poole" ], "title": "Improved knowledge graph embedding using background taxonomic information", "venue": "arXiv preprint arXiv:1812.03235,", "year": 2018 }, { "authors": [ "Jun Feng", "Minlie Huang", "Mingdong Wang", "Mantong Zhou", "Yu Hao", "Xiaoyan Zhu" ], "title": "Knowledge graph embedding by flexible translation", "venue": "In Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning,", "year": 2016 }, { "authors": [ "Shu Guo", "Quan Wang", "Lihong Wang", "Bin Wang", "Li Guo" ], "title": "Jointly embedding knowledge graphs and logical rules", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Shu Guo", "Quan Wang", "Lihong Wang", "Bin Wang", "Li Guo" ], "title": "Knowledge graph embedding with iterative guidance from soft rules", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Frank L Hitchcock" ], "title": "The expression of a tensor or a polyadic as a sum of products", "venue": "Journal of Mathematics and Physics,", "year": 1927 }, { "authors": [ "Guoliang Ji", "Shizhu He", "Liheng Xu", "Kang Liu", "Jun Zhao" ], "title": "Knowledge graph embedding via dynamic mapping matrix", "venue": "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),", "year": 2015 }, { "authors": [ "Seyed Mehran Kazemi", "David Poole" ], "title": "Simple embedding for link prediction in knowledge graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yankai Lin", "Zhiyuan Liu", "Huanbo Luan", "Maosong Sun", "Siwei Rao", 
"Song Liu" ], "title": "Modeling relation paths for representation learning of knowledge bases", "venue": "arXiv preprint arXiv:1506.00379,", "year": 2015 }, { "authors": [ "Yankai Lin", "Zhiyuan Liu", "Maosong Sun", "Yang Liu", "Xuan Zhu" ], "title": "Learning entity and relation embeddings for knowledge graph completion", "venue": "In Twenty-ninth AAAI conference on artificial intelligence,", "year": 2015 }, { "authors": [ "Hanxiao Liu", "Yuexin Wu", "Yiming Yang" ], "title": "Analogical inference for multi-relational embeddings", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "George A Miller" ], "title": "Wordnet: a lexical database for english", "venue": "Communications of the ACM,", "year": 1995 }, { "authors": [ "Pasquale Minervini", "Luca Costabello", "Emir Munoz", "Novacek", "Pierre-Yves Vandenbussche" ], "title": "Regularizing knowledge graph embeddings via equivalence and inversion axioms", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2017 }, { "authors": [ "Mojtaba Nayyeri", "Sahar Vahdati", "Jens Lehmann", "Hamed Shariat Yazdi" ], "title": "Soft marginal transe for scholarly knowledge graph completion", "venue": null, "year": 1904 }, { "authors": [ "Dat Quoc Nguyen", "Kairit Sirts", "Lizhen Qu", "Mark Johnson" ], "title": "Stranse: a novel embedding model of entities and relationships in knowledge bases", "venue": "arXiv preprint arXiv:1606.08140,", "year": 2016 }, { "authors": [ "Michael Schlichtkrull", "Thomas N Kipf", "Peter Bloem", "Rianne Van Den Berg", "Ivan Titov", "Max Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In European Semantic Web Conference,", "year": 2018 }, { "authors": [ "Zhiqing Sun", "Zhi-Hong Deng", "Jian-Yun Nie", "Jian Tang" ], "title": "Rotate: Knowledge graph embedding by relational rotation in complex space", "venue": null, "year": 1902 }, { "authors": [ "Kristina Toutanova", "Danqi Chen" ], "title": "Observed versus latent features for knowledge base and text inference", "venue": "In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality,", "year": 2015 }, { "authors": [ "Théo Trouillon", "Johannes Welbl", "Sebastian Riedel", "Éric Gaussier", "Guillaume Bouchard" ], "title": "Complex embeddings for simple link prediction", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Quan Wang", "Zhendong Mao", "Bin Wang", "Li Guo" ], "title": "Knowledge graph embedding: A survey of approaches and applications", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2017 }, { "authors": [ "Yanjie Wang", "Rainer Gemulla", "Hui Li" ], "title": "On multi-relational link prediction with bilinear models", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Zhen Wang", "Jianwen Zhang", "Jianlin Feng", "Zheng Chen" ], "title": "Knowledge graph embedding by translating on hyperplanes", "venue": "In Twenty-Eighth AAAI conference on artificial intelligence,", "year": 2014 }, { "authors": [ "Xiaofei Zhou", "Qiannan Zhu", "Ping Liu", "Li Guo" ], "title": "Learning knowledge embeddings by combining limit-based scoring loss", "venue": "In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management,", "year": 2017 }, { "authors": [ "Xiaofei Zhou", "Qiannan Zhu", "Ping Liu", "Li Guo" ], "title": "Learning knowledge 
embeddings by combining limit-based scoring loss", "venue": "In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Knowledge is considered as commonsense facts and other information accumulated from different sources. A Knowledge Graph (KG) is collection of facts and is usually represented as a set of triples (h, r, t) where h, t are entities and r is a relation, e.g. (iphone, hyponym, smartphone). Entities and relations are nodes and edges in the graph, respectively. As KGs are inherently incomplete, making prediction of missing links is a fundamental task in knowlege graph analyses. Among different approaches used for KG completion, KG Embedding (KGE) has recently received growing attentions. KGE embeds entities and relations as low dimensional vectors known as embeddings. To measure the degree of plausibility of a triple, a scoring function is defined over the embeddings.\nTransE, Translation-based Embedding model, (Bordes et al., 2013) is one of the most widely used KGE models. The original assumption of TransE is to hold: h + r = t, for every positive triple (h, r, t) where h, r, t ∈ Rd are embedding vectors of head (h), relation (r) and tail (t) respectively. TransE and its many variants like TransH (Wang et al., 2014) and TransR (Lin et al., 2015b), underperform greatly compared to the current state-of-the-art models. That is reported to be due to the limitations of their scoring functions. For instance, (Wang et al., 2018) reports that TransE cannot encode a relation pattern which is neither reflexive nor irreflexive.\nIn most of these works the effect of the loss function is ignored and the provided proofs are based on the assumptions that are not fulfilled by the associated loss functions. For instance (Sun et al., 2019) proves that TransE is incapable of encoding symmetric relation. To this end the loss function must enforce the distance of ‖h + r− t‖ to zero, but this is never fulfilled (or even approximated) by the employed loss function. Similarly, (Wang et al., 2018) reports that TransE cannot encode a relation pattern which is neither reflexive nor irreflexive and (Wang et al., 2014) adds that TransE cannot\nproperly encode reflexive, one-to-many, many-to-one and many-to-many relations. However, as mentioned earlier, such reported limitations are not accurate and the problem is not fully investigated due to the effect of the loss function.\nIn this regards, although TransH, TransR and TransD (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) addressed the reported problem of TransE in one-to-many, many-to-one, many-to-many and reflexive etc, they were misled by the assumption (enforcing ‖h + r− t‖ to be zero) that was not fulfilled by the employed loss function. Considering the same assumption, (Kazemi & Poole, 2018) investigated three additional limitations of TransE, FTransE (Feng et al., 2016), STransE (Nguyen et al., 2016), TransH and TransR models: (i) if the models encode a reflexive relation r, they automatically encode symmetric, (ii) if the models encode a reflexive relation r, they automatically encode transitive and, (iii) if entity e1 has relation r with every entity in ∆ ∈ E and entity e2 has relation r with one of entities in ∆, then e2 must have the relation r with every entity in ∆.\nAssuming that the loss function enforces the norm to be zero, the aforementioned works have investigated these limitations by focusing on the capability of scoring functions in encoding relation patterns. 
However, we prove that the selection of the loss function affects the boundary of the score function; consequently, the selection of the loss function significantly affects the limitations. Therefore, the above-mentioned theories corresponding to the limitations of translation-based embedding models in encoding relation patterns are inaccurate. We pose new theories about the limitations of the TransX (X = H, D, R, etc.) models considering the loss functions. To the best of our knowledge, this is the first time that the effect of the loss function is investigated to prove theories corresponding to the limitations of translation-based models.

In a nutshell, the key contributions of this paper are as follows. (i) We show that different loss functions enforce different upper-bounds and lower-bounds for the scores of positive and negative samples, respectively. This implies that existing theories corresponding to the limitations of TransX models are inaccurate, because the effect of the loss function was ignored. We introduce new theories accordingly and prove that the proper selection of loss functions mitigates the main limitations. (ii) We reformulate the existing loss functions and their optimization problems as standard constrained optimization problems. This makes perfectly clear how each of the loss functions affects the boundary of triple scores and, consequently, the ability to encode relation patterns. (iii) Using symmetric relation patterns, we obtain a proper upper-bound on the scores of positive triples that enables the encoding of symmetric patterns. (iv) We additionally investigate the theoretical capability of the translation-based embedding model when translation is applied in the complex space (TransComplEx). We show that TransComplEx is a more powerful embedding model with fewer theoretical limitations in encoding different relation patterns, such as symmetric, while being efficient in memory and time." }, { "heading": "2 RELATED WORKS", "text": "Most of the previous works have investigated the capability of the translation-based class of embedding models considering solely the formulation of the score function. Accordingly, in this section, we review the score functions of TransE and some of its variants together with their capabilities. Then, in the next section, the existing limitations of translation-based embedding models emphasized in recent works are reviewed. These limitations will be reinvestigated in the light of both score and loss functions in Section 4.

The score of TransE (Bordes et al., 2013) is defined as fr(h, t) = ‖h + r − t‖. TransH (Wang et al., 2014) projects each entity (e) to the relation space, e⊥ = e − w_r^⊤e w_r. The score function is defined as fr(h, t) = ‖h⊥ + r − t⊥‖. TransH can encode reflexive, one-to-many, many-to-one and many-to-many relations. However, recent theories (Kazemi & Poole, 2018) prove that encoding reflexive results in encoding both symmetric and transitive patterns, which is undesired. TransR (Lin et al., 2015b) projects each entity (e) to the relation space by using a matrix provided for each relation, e⊥ = eM_r, M_r ∈ R^{d_e×d_r}. TransR uses the same scoring function as TransH. TransD (Ji et al., 2015) provides two vectors for each individual entity and relation (h, h_p, r, r_p, t, t_p). Head and tail entities are projected by using the following matrices: M_rh = r_p h_p^⊤ + I^{m×n}, M_rt = r_p t_p^⊤ + I^{m×n}. The score function of TransD is similar to the score function of TransH.

RotatE (Sun et al., 2019) rotates the head entity to the tail entity by using the relation.
RotatE embeds entities and relations in the complex space. With constraints on the norm of the entity vectors, the model degenerates to TransE. The scoring function of RotatE is fr(h, t) = ‖h ◦ r − t‖, where h, r, t ∈ C^d and ◦ is the element-wise product. RotatE obtains state-of-the-art results using a very large embedding dimension (1000) and a lot of negative samples (1000). TorusE (Ebisu & Ichise, 2018) fixes the regularization problem of TransE by applying translation on a compact Lie group. The model has several variants, including a mapping from the torus to the complex space. In this case, the model can be regarded as a very special case of RotatE (Sun et al., 2019), which applies rotation instead of translation in the target complex space. According to Sun et al. (2019), TorusE is not defined on the entire complex space. Therefore, it has less representation capacity. TorusE needs a very large embedding dimension (10000, as reported in Ebisu & Ichise (2018)), which is a limitation." }, { "heading": "3 THE MAIN LIMITATIONS OF TRANSLATION-BASED EMBEDDING MODELS", "text": "We review six limitations of translation-based embedding models in encoding relation patterns (e.g. reflexive, symmetric) mentioned in the literature (Wang et al., 2014; Kazemi & Poole, 2018; Wang et al., 2018; Sun et al., 2019).

Limitation L1. TransE cannot encode reflexive relations when the relation vector is non-zero (Wang et al., 2014).

Limitation L2. TransE cannot encode a relation which is neither reflexive nor irreflexive. To see this, if TransE encodes a relation r which is neither reflexive nor irreflexive, we have h1 + r = h1 and h2 + r ≠ h2, resulting in r = 0 and r ≠ 0, which is a contradiction (Wang et al., 2018).

Limitation L3. TransE cannot properly encode symmetric relations when r ≠ 0. To see this (Sun et al., 2019), if r is symmetric, then we have h + r = t and t + r = h. Therefore, r = 0, and so all entities appearing in the head or tail parts of the training triples will have the same embedding vectors.

The following limitations hold for TransE, FTransE, STransE, TransH and TransR (Feng et al., 2016; Nguyen et al., 2016; Kazemi & Poole, 2018):

Limitation L4. If r is reflexive on ∆ ⊆ E, where E is the set of all entities in the KG, then r must also be symmetric.

Limitation L5. If r is reflexive on ∆ ⊆ E, r must also be transitive.

Limitation L6. If entity e1 has relation r with every entity in ∆ ⊆ E and entity e2 has relation r with one of the entities in ∆, then e2 must have the relation r with every entity in ∆." }, { "heading": "4 OUR MODEL", "text": "TransE and its variants underperform compared to other embedding models due to the limitations we listed in Section 3. In this section, we reinvestigate these limitations. We show that the corresponding theoretical proofs are inaccurate because the effect of the loss function is ignored. We therefore propose new theories and prove that each of the limitations of TransE is resolved by revising either the scoring function or the loss. In this regard, we consider several loss functions and their effects on the boundary of the TransE scoring function. For each of the loss functions, we pose theories corresponding to the limitations. We additionally investigate the limitations of TransE using each of the loss functions when translation is performed in the complex space, and show that with this new approach the aforementioned limitations are lifted. Our new model, TransComplEx, with a proper selection of the loss function, addresses the above problems."
}, { "heading": "4.1 TRANSCOMPLEX: TRANSLATIONAL EMBEDDING MODEL IN THE COMPLEX SPACE", "text": "TransComplEx translates head entity vector to the conjugate of the tail vector using relation in the complex space (Trouillon et al., 2016). Assuming h, r, t ∈ Cd be complex vectors of dimension d, the score function is defined as follows:\nfr(h, t) = ‖h + r− t̄‖\nAdvantages of TransComplEx: We highlight four advantages of using the above formulation. (i) Comparing to TransE and its variants, TransComplEx has less limitations in encoding different relation patterns. The theories and proofs are provided in the next part. (ii) Using conjugate of tail\nvector in the formulation enables the model to make difference between the role of an entity as subject or object. This cannot be properly captured by TransE and its variants. (iii) Given the example (A,Like, Juventus), (Juventus, hasP layer, C.Ronaldo), that C.Ronaldo plays for Juventus may affect the person A to like the team. This type of information cannot be properly captured by models such as CP decomposition (Hitchcock, 1927) where two independent vectors are provided (Kazemi & Poole, 2018) for Juventus (for subject and object). In contrast, our model uses same real and imaginary vectors for Juventus when it is used as subject or object. Therefore, TransComplEx can properly capture dependency between the two triples with the same entity used as subject and object. And finally, (iiii) ComplEx (Trouillon et al., 2016) has much more computational complexity comparing to TransComplEx because it needs to compute eight vector multiplications to obtain score of a triple while our model only needs to do four vector summation/subtractions. In the experiment section, we show that TransComplEx outperforms ComplEx on various dataset." }, { "heading": "4.2 REINVESTIGATION OF THE LIMITATIONS OF TRANSLATION-BASED MODELS", "text": "The aim of this part is to analyze the limitations of Translation-based embedding models (including TransE and TransComplEx) by considering the effect of both score and loss functions. Different loss functions provide different upper-bound and lower-bound for positive and negative triples scores, respectively. Therefore, the loss functions affect the limitations of the models to encode relation patterns. In this regard, the existing works consider a positive triple of (h, r, t) and a negative triple of (h′, r, t′) to satisfy some assumptions in a score function. For instance in TransE where fr(h, t) = ‖h + r − t‖, it is expected that fr(h, t) = 0 and fr(h′, t′) > 0. Unfortunately, as we show later, this can not be fulfilled (or even approximated) by using the proposed loss functions (e.g. margin ranking loss and RotatE loss).\nTo investigate and address the limitations, we propose four conditions (Table-1) for taking a triple as positive or negative by the score function. This is done by defining upper-bound and lower-bound for the scores. We show that these conditions can be approximated by proper loss functions, and in this regards we propose four losses that each will handle one of the condition.\nTo better comprehend Table-1, we have visualized the conditions in Figure 1. The condition (a) indicates a triple is positive if h + r = t holds. It means that the length of residual vector i.e. = h + r − t, is zero. It is the most strict condition that expresses being positive. 
Authors in (Sun et al., 2019; Kazemi & Poole, 2018) consider this condition to prove their theories, as well as the limitation of TransE in encoding symmetric relations. However, the employed loss cannot approximate (a); rather, it fulfills (c), rendering the reported limitation void in that setting.

Condition (b) considers a triple to be positive if its residual vector lies on a hyper-sphere with radius γ1. It is less restrictive than (a), which requires a single point to express positivity. The optimization problem that approximates the conditions (a) (γ1 = 0) and (b) (γ1 > 0) is as follows:

min_{ξ_{h,t}} Σ_{(h,r,t)∈S+} ξ_{h,t}²
subject to: fr(h, t) = γ1, (h, r, t) ∈ S+,
fr(h′, t′) ≥ γ2 − ξ_{h,t}, (h′, r, t′) ∈ S−,
ξ_{h,t} ≥ 0, (1)

where S+, S− are the sets of positive and negative samples, respectively, and the ξ_{h,t} are slack variables to reduce the effect of noise in negative samples (Nayyeri et al., 2019).

One loss function that approximates the conditions (a) and (b) is as follows. Please note that for case (a) we set γ1 = 0 and for case (b) we set γ1 > 0 in the formula:

L_{a|b} = Σ_{(h,r,t)∈S+} ( λ1 ‖fr(h, t) − γ1‖ + Σ_{(h′,r,t′)∈S−_{(h,r,t)}} λ2 max(γ2 − fr(h′, t′), 0) ). (2)

Condition (c) considers a triple to be positive if its residual vector lies inside a hyper-sphere of radius γ1. The optimization problem that approximates condition (c) is as follows (Nayyeri et al., 2019):

min_{ξ_{h,t}} Σ_{(h,r,t)∈S+} ξ_{h,t}²
subject to: fr(h, t) ≤ γ1, (h, r, t) ∈ S+,
fr(h′, t′) ≥ γ2 − ξ_{h,t}, (h′, r, t′) ∈ S−,
ξ_{h,t} ≥ 0. (3)

The loss function that approximates condition (c) is as follows (Nayyeri et al., 2019):

L_c = Σ_{(h,r,t)∈S+} ( λ1 max(fr(h, t) − γ1, 0) + Σ_{(h′,r,t′)∈S−_{(h,r,t)}} λ2 max(γ2 − fr(h′, t′), 0) ). (4)

Remark: The loss function defined in Zhou et al. (2017b) is slightly different from the loss in Eq. 4. The former slides the margin, while the latter fixes the margin by including a lower-bound for the scores of negative triples. Both losses put an upper-bound on the scores of positive triples.

Apart from the loss 4, the RotatE loss (Sun et al., 2019) also approximates condition (c). The formulation of the RotatE loss is as follows:

L_c^{RotatE} = − Σ_{(h,r,t)∈S+} ( log σ(γ − fr(h, t)) + Σ_{(h′,r,t′)∈S−_{(h,r,t)}} log σ(fr(h′, t′) − γ) ).

Condition (d) is similar to (c), but provides different γ1, γ2 for each triple. Using (d), there is not a unique region of truth for all positive triples; rather, for each positive triple (h, r, t) and its corresponding negative triple (h′, r, t′) there are triple-specific regions of truth and falsity. The margin ranking loss (Bordes et al., 2013) approximates (d). Defining [x]+ = max(0, x), the loss is defined as:

L_d = Σ_{(h,r,t)∈S+} Σ_{(h′,r,t′)∈S−_{(h,r,t)}} [fr(h, t) + γ − fr(h′, t′)]+ . (5)

To investigate the limitations, we must assume that the relation vector is not null; otherwise, we would have the same embedding for head and tail, which is undesirable. Considering the conditions (a) to (d), we investigate the limitations L1 to L6 and prove that the existing theories are only valid under (a), which is not fulfilled by the employed losses. In this regard, we have the following theorems. For complete proofs, please refer to the appendix of the paper.

Theorem T1. (Addressing L1): TransE and TransComplEx cannot infer a reflexive relation pattern with a non-zero relation vector under (a). However, under (b-d), TransE and TransComplEx can infer reflexive patterns.

Theorem T2. (Addressing L2): (i) TransComplEx can infer a relation which is neither reflexive nor irreflexive under (b-d).
(ii) TransE cannot infer a relation which is neither reflexive nor irreflexive under (a-d).

Theorem T3. (Addressing L3): (i) TransComplEx can infer symmetric relations under (a-d). (ii) TransE cannot infer symmetric relations under (a) with a non-zero relation vector. (iii) TransE can infer a symmetric relation under (b-d).

Proof: Proofs of (i) and (ii) are provided in the appendix. For (iii) we have:

Under (b), for TransE we have ‖h + r − t‖ = γ1 and ‖t + r − h‖ = γ1. The necessary condition for encoding a symmetric relation is ‖h + r − t‖ = ‖t + r − h‖. This implies ‖h‖ cos(θ_{h,r}) = ‖t‖ cos(θ_{t,r}). Let h − t = u; by definition we have ‖u + r‖ = γ1 and ‖u − r‖ = γ1. Now let γ1 = α‖r‖; we have:

‖u‖² + (1 − α²)‖r‖² = −2⟨u, r⟩,
‖u‖² + (1 − α²)‖r‖² = 2⟨u, r⟩. (6)

Therefore we have ‖u‖² + (1 − α²)‖r‖² = −(‖u‖² + (1 − α²)‖r‖²), which can be written as ‖u‖² = (α² − 1)‖r‖². To avoid a contradiction we must have α > 1. Once α > 1, we have θ_{u,r} = π/2. Therefore, TransE can encode a symmetric relation under condition (b) when γ1 = α‖r‖ and α > 1. Figure 2 shows the different conditions for encoding a symmetric relation.

Conditions (c) and (d) follow directly from (b), as (b) is subsumed by (c) and (d). That completes the proof.

Theorem T4. (Addressing L4): For both TransE and TransComplEx, (i) limitation L4 holds under (a); (ii) limitation L4 is not valid under (b-d).

Theorem T5. (Addressing L5): For both TransE and TransComplEx, (i) limitation L5 holds under (a); (ii) limitation L5 is not valid under (b-d).

Theorem T6. (Addressing L6): For both TransE and TransComplEx, (i) limitation L6 holds under (a); (ii) limitation L6 is not valid under (b-d)." }, { "heading": "4.3 ENCODING RELATION PATTERNS IN TRANSCOMPLEX", "text": "Most KGE models learn from triples. Recent works incorporate relation patterns such as transitivity and symmetry on top of the triples to further improve the performance of the models. For instance, ComplEx-NNE+AER (Ding et al., 2018) encodes the implication pattern in the ComplEx model. RUGE (Guo et al., 2018) injects first-order Horn clause rules into an embedding model. SimplE (Kazemi & Poole, 2018) captures symmetric, antisymmetric and inverse relations by weight tying in the model. Inspired by (Minervini et al., 2017) and considering the score function of TransComplEx, in this part we derive formulae for equivalence, symmetry, inversion and implication to be used as regularization terms in the optimization problem. In this way, TransComplEx incorporates different relation patterns when optimizing the embeddings.

Symmetric: Assume that r is a symmetric relation. To encode it, we should have fr(h, t) ≈ fr(t, h), therefore ‖fr(h, t) − fr(t, h)‖ = 0. According to the definition of the score function of TransComplEx, we obtain the following algebraic formula: R_Sym := ‖Re(h) − Re(t)‖ = 0. Using similar arguments, the following formulae are derived for equivalence, inversion and implication.

Equivalence: Let p, q be equivalent relations; then we should have fp(h, t) ≈ fq(h, t). We obtain R_Eq := ‖p − q‖ = 0.

Implication: Let p → q be the implication rule. We obtain R_Imp := max(fp(h, t) − fq(h, t), 0) = 0.

Inverse: Let r ←→ r⁻¹ be the inverse relation pair. We obtain R_Inv := ‖r − r⁻¹‖ = 0.

Finally, the following optimization problem should be solved:

min_θ L + Σ_i η_i R_i, (7)

where θ denotes the embedding parameters, L is one of the losses 2, 4 or 5, and R_i is one of the derived formulae mentioned above."
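As a sketch of how the objective in Eq. 7 can be assembled in practice, the following PyTorch snippet combines the limit-based loss of Eq. 4 with only the symmetric regularizer R_Sym; the function names are illustrative, and batching, negative sampling and the remaining pattern terms are omitted:

```python
import torch

def loss_eq4(pos_scores, neg_scores, gamma1, gamma2, lam1=1.0, lam2=1.0):
    """Limit-based loss of Eq. 4: upper-bounds positive scores by gamma1
    and lower-bounds negative scores by gamma2."""
    pos_term = lam1 * torch.clamp(pos_scores - gamma1, min=0.0).sum()
    neg_term = lam2 * torch.clamp(gamma2 - neg_scores, min=0.0).sum()
    return pos_term + neg_term

def r_sym(re_h, re_t):
    """R_Sym := ||Re(h) - Re(t)|| for triples whose relation is symmetric."""
    return torch.norm(re_h - re_t, dim=-1).sum()

def objective_eq7(pos_scores, neg_scores, re_h_sym, re_t_sym,
                  gamma1, gamma2, eta_sym=0.1):
    """Eq. 7: L + sum_i eta_i * R_i, here with a single pattern regularizer."""
    return loss_eq4(pos_scores, neg_scores, gamma1, gamma2) \
        + eta_sym * r_sym(re_h_sym, re_t_sym)
```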
}, { "heading": "5 EXPERIMENTS AND EVALUATIONS", "text": "In this section, we evaluate performance of our model, TransComplEx, with different loss functions on the link prediction task. The aim of the task is to complete the triple (h, r, ?) or (?, r, t) by prediction of the missed entity h or t. Filtered Mean Rank (MR), Mean Reciprocal Rank (MRR) and Hit@10 are used for evaluations (Wang et al., 2017; Lin et al., 2015b).\nDataset. We use two dataset extracted from Freebase (Bollacker et al., 2008) (i.e. FB15K (Bordes et al., 2013) and FB15K-237 (Toutanova & Chen, 2015)) and two others extracted from WordNet (Miller, 1995) (i.e. WN18 (Bordes et al., 2013) and WN18RR (Dettmers et al., 2018)). FB15K and WN18 are earlier dataset which have been extensively used to compare performance of KGEs. FB15K-237 and WN18RR are two dataset which are supposed to be more challenging after removing inverse patterns from FB15K and WN18. Guo et al. (2018) and Ding et al. (2018) extracted different relation patterns from FB15K and WN18 respectively. The relation patterns are provided by their confidence level, e.g. (a,BornIn, b) 0.9−−→ (a,Nationality, b). We drop the relation patterns with confidence level less than 0.8. Generally, we use 454 and 14 relation patterns for FB15K and WN18 respectively. We do grounding for symmetric and transitive relation patterns. Thanks to the formulation of score function, grounding is not needed for inverse, implication and equivalence.\nExperimental Setup. We implement TransComplEx with the losses 2, 4 and 5 and TransE with the loss 4 in PyTorch. Adagrad is used as an optimizer. We generate 100 mini-batches in each iteration. The hyperparameter corresponding to the score function is embedding dimension d. We add slack variables to the losses 2 and 4 to have soft margin as in (Nayyeri et al., 2019). The loss 4 is rewritten as follows Nayyeri et al. (2019):\nmin ξrh,t ∑ (h,r,t)∈S+ ( λ0 ξ r h,t 2 + λ1 max(fr(h, t)− γ1, 0)+\nλ2 ∑\n(h,r,t)∈S− h′,r,t′\nmax(γ2 − fr(h ′ , t ′ )− ξrh,t, 0) ) .\n(8)\nWe set λ1 and λ2 to one and search for the hyperparameters γ1(γ2 > γ1) and λ0 in the sets {0.1, 0.2, . . . , 2} and {0.01, 0.1, 1, 10, 100} respectively. Moreover, we generate α ∈ {1, 2, 5, 10}\nnegative samples per each positive. The embedding dimension and learning rate are tuned from the sets {100, 200}, {0.0001, 0.0005, 0.001, 0.005, 0.01} respectively. All hyperparameters are adjusted by early stopping on validation set according to MRR. RPTransComplEx# denotes the TransComplEx model which is trained by the loss function # (2, 4, 5). RP indicates that relation patterns are injected during learning by regularizing the derived formulae (see 7). TransComplEx# refers to our model trained with the loss # without regularizing relation patterns formulae. The same notation is used for TransE#. The optimal configurations for RPTransComplEx are provided in the appendix.\nResults. Table 2 presents comparison of TransComplEx and its relation pattern encoded variants (RPTransComplEx) with three classes of embedding models on the most famous two datasets of FB15K and WN18. The first category (CAT1) of models consist of translation-based model (e.g. TransX, TorusE). The second category (CAT2) are embedding models which are not translationbased (e.g. ConvE, ComplEx, ANALOGY). The previous two categories (CAT1/2) can learn relation patterns that are reflected in triples. In other words they learn patterns implicitly from existing triples. 
The last category (CAT3) consists of models which can explicitly encode (inject) rules in their training process (e.g. RUGE, ComplEx-NNE+AER, SimplE, SimplE+). The models in CAT3 are actually trained on relation patterns as well as triples.

We trained the relation-pattern version of our model, i.e. RPTransComplEx, using the losses 2, 4 and 5. All variants of RPTransComplEx, except RPTransComplEx5, generally perform better than the models in CAT3, which also explicitly inject relation patterns. The exception, RPTransComplEx5, is due to the fact that it uses the margin ranking loss, whose restriction regarding condition (d) we have already shown. In this regard, please consider histogram (d) in Figure 1.

As all limitations are mitigated by condition (c), we expected that RPTransComplEx4, which is associated with (c), would perform better than the others. This is empirically confirmed, as shown in Table 2. The question is how our model behaves under the loss 4 when we additionally disregard the injection of relation patterns. To this end, we considered TransComplEx4, which uses the loss 4 but does not do any injection. Since the performances of RPTransComplEx4 and TransComplEx4 are very close, we conclude that the latter also learns the patterns from the existing triples quite efficiently. This is also confirmed by their convergence behavior on the symmetric relation, as shown in Figure 3.

To have a fair comparison1 with the categories of models which disregard the injection of relation patterns (CAT1/2), we used the TransComplEx4 version of our approach. As shown in Table 2, we can observe that TransComplEx4 performs better than the other models in CAT1/2 considering MR and Hits@10, and performs very close to the best models on MRR.

As discussed earlier, FB15K-237 and WN18RR are two more challenging datasets provided recently. Table 3 presents the comparison of our models with those models whose performance results are available on these two datasets. Similar to our previous discussion of Table 2, we observe that RPTransComplEx4 and TransComplEx4 perform similarly, which means that the latter has learned the relation patterns from the triples very well without any injection. On WN18RR, TorusE performs better than TransComplEx4 due to its large embedding dimension of 10,000. On FB15k-237, RPTransComplEx4 performed better than the others on MRR and Hits@10.

In order to investigate the effect of grounding, we train RPTransComplEx4 in two settings: 1) RPTransComplEx4 (w grounding) is trained with the grounded patterns injected; 2) RPTransComplEx4 (w/o grounding) is trained with the relation patterns which are not grounded. According to Table 4, the grounding does not affect the performance significantly. We conclude that the model properly learns the relation patterns even without grounding.

Boosting techniques: There are several ways to improve the performance of embedding models: 1) designing a more sophisticated scoring function, 2) proper selection of the loss function, 3) using more negative samples, 4) using advanced negative sampling techniques, 5) enriching the dataset (e.g. adding reverse triples). Among the mentioned techniques, we focus on the first and second ones and avoid using the other techniques. We keep the setting used in (Trouillon et al., 2016) to have a fair comparison.

1Accordingly, we ran the RotatE code in our setting (embedding dimension 200 and 10 negative samples). The original paper used very large values of 1000 and 1000, respectively.
Using other techniques can further improve the performance of every model, including ours. For example, TransComplEx with embedding dimension 200 and 50 negative samples gets 52.2 for Hits@10. Further analyses of our models in a bigger setting (larger embedding dimension and more negative samples) are provided in the appendix. Still, the loss 4 performs better, and we conclude that our theoretical framework is confirmed by the empirical experiments." }, { "heading": "6 CONCLUSION", "text": "In this paper, we reinvestigated the main limitations of translation-based embedding models from two aspects: the score and the loss. We showed that the existing theories corresponding to the limitations of the models are inaccurate because the effect of the loss function has been ignored. Accordingly, we presented new theories about the limitations by considering the effect of both score and loss functions. We proposed TransComplEx, a new variant of TransE which is proven to be less limited compared to TransE. The model was trained using various loss functions on standard datasets, including FB15K, FB15K-237, WN18 and WN18RR. According to the experiments, TransComplEx with a proper loss function significantly outperformed translation-based embedding models. Moreover, TransComplEx achieved competitive performance compared to the state-of-the-art embedding models while being more efficient in time and memory. The experimental results confirmed the presented theories corresponding to the limitations." }, { "heading": "A FURTHER EXPERIMENTS WITH A BIGGER SETTING", "text": "In this section we compare TransE, TransComplEx and RotatE trained by using the loss 2 (conditions (a), (b)), the loss 4 (condition (c)) and the RotatE loss (condition (c)). In contrast to our previous experiments, we use a bigger setting: for FB15K-237, we set the embedding dimension to 300 and the number of negative samples to 256, and for WN18RR, we set the embedding dimension and the number of negative samples to 300 and 250, respectively. We additionally use the adversarial negative sampling technique of (Sun et al., 2019) for all models.

Analysis of the results: Table 5 presents a comparison of TransE, TransComplEx and RotatE trained with the different losses. TransE2 (γ1 = 0) is trained by using the loss 2 with γ1 = 0. TransE2 (γ1 > 0) refers to the TransE model trained by using the loss 2 with γ1 a non-zero positive value. The TransE models trained with the loss 4 and the RotatE loss (i.e., L_c^{RotatE}) are denoted by TransE4 and TransE_{L_c^{RotatE}}, respectively. Similar notations are used for TransComplEx and RotatE when they are trained with the different loss functions. The loss 2 with γ1 = 0 approximates the condition (a). The loss 2 with γ1 > 0 approximates the condition (b). The condition (c) can be approximated by using the loss 4 and the RotatE loss. However, the loss 4 provides a better separation between positive and negative samples than the RotatE loss. According to Table 5, the loss 4 obtains a better performance than the other losses in each class of the studied models. This is consistent with our theories indicating that the condition (c) is less restrictive. Although we only investigated the main limitations of the translation-based class of embedding models, the theories can be generalized to other models, including the RotatE model. From the table, we can see that the loss 4 improves the performance of RotatE. Regarding Table 5, the loss 2 (γ1 = 0) gets the worst results.
This confirms our theories that under the condition (a), most of the limitations hold, whereas under the condition (c), the limitations no longer exist. There have not been any losses that approximate the condition (a). However, most of the theories corresponding to the main limitations of the translation-based class of embedding models have been proven using the condition (a), while the employed loss did not approximate that condition. Therefore, the theories and experimental justifications have not been accurate." }, { "heading": "B RELATION PATTERN CONVERGENCE ANALYSIS", "text": "Figure 4 visualizes the convergence curves of the inverse loss with (RPTransComplEx4) and without (TransComplEx4) injection when the models are trained on WN18. Figure 5 shows the convergence of the TransComplEx4 model trained on FB15K with and without relation pattern injection. The figures show that the models trained by using the loss 4 can properly encode the relation patterns even without any injection mechanism. In other words, the TransComplEx4 model can properly encode several relation patterns by only training on the triples (without using any additional relation pattern set to be injected). This shows the advantages of the models and the used losses." }, { "heading": "C PROOF OF THE THEOREM", "text": "The proofs of the theorems are provided as follows:

Proof of Theorem T1: 1) Let r be a reflexive relation and condition (a) hold. For TransE, we have

h + r − h = 0. (9)

Therefore, the relation vector collapses to a null vector (r = 0). As a consequence of r = 0, the embedding vectors of the head and tail entities will be the same, which is undesired. Therefore, TransE cannot infer a reflexive relation with r ≠ 0. For TransComplEx, we have

h + r − h̄ = 0. (10)

It follows that

Re(r) = 0, Im(r) = −2 Im(h). (11)

Therefore, all entities would have the same embedding vectors, which is undesired.

2) Using condition (b), we have ‖h + r − h‖ = γ1, which gives ‖r‖ = γ1. Therefore, in order to infer a reflexive relation, the length of the relation vector should be γ1. Consequently, TransE and TransComplEx can infer reflexive relations. The same procedure can be used for the conditions (c) and (d).

Proof of Theorem T2: i) Let the relation r be neither reflexive nor irreflexive, and let the two triples (e1, r, e1), (e2, r, e2) be positive and negative, respectively. Therefore, the following inequalities hold:

‖e1 + r − ē1‖ ≤ γ1,
‖e2 + r − ē2‖ ≥ γ2. (12)

Equation 12 can be rewritten as follows:

‖Re(r) + i(Im(r) + 2 Im(e1))‖ ≤ γ1,
‖Re(r) + i(Im(r) + 2 Im(e2))‖ ≥ γ2. (13)

For TransE in the real space, ‖Re(r)‖ ≤ γ1 and ‖Re(r)‖ ≥ γ2 cannot hold simultaneously when γ2 > γ1. Therefore, TransE in the real space cannot encode a relation which is neither reflexive nor irreflexive. In contrast, TransE in the complex space can encode the relation by a proper assignment of the imaginary parts of the entities. Therefore, theoretically, TransComplEx can infer a relation which is neither reflexive nor irreflexive.

Proof of Theorem T3: i), ii) Let r be a symmetric relation and condition (a) hold. We have

h + r = t̄, t + r = h̄. (14)

Trivially, we have

Re(h) + Re(r) = Re(t), Re(t) + Re(r) = Re(h),
Im(h) + Im(r) = −Im(t), Im(t) + Im(r) = −Im(h). (15)

For TransE in the real space, there is

Re(h) + Re(r) = Re(t), Re(t) + Re(r) = Re(h),

and therefore Re(r) = 0. This means that TransE cannot infer symmetric relations under condition (a). For TransComplEx, we additionally have

Im(h) + Im(r) = −Im(t), Im(t) + Im(r) = −Im(h),

which gives Im(h) + Im(r) + Im(t) = 0.
Therefore, TransE in the complex space under condition (a) can infer symmetric relations. Because (a) is a special case of (b) and (c), TransComplEx can infer symmetric relations under all conditions.

3) For TransE with condition (b), there is

‖h + r − t‖ = γ1, (16)
‖t + r − h‖ = γ1. (17)

The necessary condition for encoding a symmetric relation is ‖h + r − t‖ = ‖t + r − h‖. This implies ‖h‖ cos(θ_{h,r}) = ‖t‖ cos(θ_{t,r}). Let h − t = u; by (16) and (17) we have ‖u + r‖ = γ1 and ‖u − r‖ = γ1. Let γ1 = α‖r‖. We have

‖u‖² + (1 − α²)‖r‖² = −2⟨u, r⟩,
‖u‖² + (1 − α²)‖r‖² = 2⟨u, r⟩. (18)

Regarding (18), we have ‖u‖² + (1 − α²)‖r‖² = −(‖u‖² + (1 − α²)‖r‖²), i.e., ‖u‖² = (α² − 1)‖r‖². To avoid a contradiction, α ≥ 1. If α ≥ 1, we have θ_{u,r} = π/2. Therefore, TransE can encode a symmetric pattern under condition (b) if γ1 = α‖r‖ and α ≥ 1. From the proof for condition (b), we conclude that TransE can also encode symmetric patterns under conditions (c) and (d).

Proof of Theorem T4: i) The proof with condition (a) for TransE is given in the paper Kazemi & Poole (2018). For TransComplEx, the proof is trivial. ii) Now, we prove that the limitation L4 is not valid when (b) holds.

Let condition (b) hold and the relation r be reflexive; we have ‖e1 + r − e1‖ = γ1 and ‖e2 + r − e2‖ = γ1. Let ‖e1 + r − e2‖ = γ1. To violate the limitation L4, the triple (e2, r, e1) should be negative, i.e., ‖e2 + r − e1‖ > γ1, which gives ‖e2 + r − e1‖² > γ1², i.e.,

‖e2‖² + ‖e1‖² + ‖r‖² + 2⟨e2, r⟩ − 2⟨e2, e1⟩ − 2⟨e1, r⟩ > γ1².

Considering ‖e1 + r − e2‖ = γ1, we have ⟨e2, r⟩ − ⟨e1, r⟩ > 0, i.e., ⟨e2 − e1, r⟩ > 0, i.e., cos(θ_{(e2−e1),r}) > 0. Therefore, the limitation L4 is not valid, i.e., if a relation r is reflexive, it is not necessarily symmetric. TransE is a special case of TransComplEx, and condition (b) is a special case of condition (c). Therefore, under conditions (b), (c) and (d), the limitation L4 is not valid for TransE and TransComplEx.

Proof of Theorem T5: i) Under condition (a), the equation h + r − t = 0 holds. Therefore, according to the paper Kazemi & Poole (2018), the model has the limitation L5.

ii) If a relation is reflexive, with condition (b) we have ‖e1 + r − e1‖ = γ1 and ‖e2 + r − e2‖ = γ1. Therefore, ‖r‖ = γ1. Let

‖e1 + r − e2‖ = γ1,
‖e2 + r − e3‖ = γ1. (19)

We need to show that the following inequality does not give a contradiction: ‖e1 + r − e3‖ > γ1. From (19) we have ⟨e2, (e1 + e2 + e3)⟩ < 0, which is not a contradiction.

Therefore, under conditions (b) and (c), the limitation L5 is not valid for either TransE or TransComplEx.

Proof of Theorem T6: i) Under condition (a), the limitation L6 is proved in Kazemi & Poole (2018). ii) Considering the assumption of L6 and the condition (b), we have

‖e1 + r − s1‖ = γ1,
‖e1 + r − s2‖ = γ1,
‖e2 + r − s1‖ = γ1. (20)

We show that the condition ‖e2 + r − s2‖ > γ1 can hold. Substituting (20) into ‖e2 + r − s2‖ > γ1, we get cos(θ_{(s1−s2),(e1−e2)}) < 0. Therefore, there are assignments to the embeddings of the entities for which the limitation L6 is not valid under conditions (b), (c) and (d).

Figure 6 shows that the limitation L6 is invalidated by a proper selection of the loss function.

C.1 FURTHER LIMITATIONS AND FUTURE WORK

In the paper, we have investigated the six limitations of TransE which are resolved by a revision of the loss function. However, revising the loss function can resolve further limitations, including those concerning 1-N, N-1 and M-N relations. More concretely, setting an upper-bound for the scores of positive samples can mitigate the M-N problem.
We will leave this as future work.

Our theories can be extended to every distance-based embedding model, including RotatE, etc.

Moreover, the negative log-likelihood loss has been shown to be effective for training different embedding models, including RotatE and TransE. This can also be explained by reformulating the negative log-likelihood loss as a standard optimization problem, showing that the loss puts a boundary on the score functions.

We will consider the mentioned points as future work." }, { "heading": "D OPTIMAL HYPER-PARAMETERS", "text": "The following tables show the optimal configurations for our models included in Tables 2 and 3." } ]
2019
null
SP:3d3842a5e0816084c5a2406f1b0143d0215b9559
[ "The authors propose a new gradient-based method (FAB) for constructing adversarial perturbations for deep neural networks. At a high level, the method repeatedly estimates the decision boundary based on the linearization of the classifier at a given point and projects to the closest \"misclassified\" example based on that estimation (similar to DeepFool). The authors build on this idea, propose several improvements, and evaluate their attack empirically against a variety of models.", "The authors extend DeepFool by adding extra steps and constraints to find adversarial images closer to the source image. Both methods project onto the decision boundary; DeepFool does an ad-hoc clipping to keep the pixel values in (0,1), whereas the proposed method respects the constraints during the steps. Also, during the steps they combine the projection of the last iterate with that of the original image to keep the result closer to the original image. Moreover, at the end of the optimization they perform extra search steps to get closer to the original image. They also add random restarts: rather than starting from the original image, they randomly choose a point within a ball of radius half the norm of the current best perturbation." ]
The evaluation of the robustness of neural-network-based classifiers against adversarial manipulations is mainly tested with empirical attacks, as the methods for exact computation, even when available, do not scale to large networks. We propose in this paper a new white-box adversarial attack wrt the lp-norms for p ∈ {1, 2,∞}, aiming at finding the minimal perturbation necessary to change the class of a given input. It has an intuitive geometric meaning, quickly yields high quality results, and minimizes the size of the perturbation (so that it returns the robust accuracy at every threshold with a single run). It performs better than or similarly to state-of-the-art attacks, which are partially specialized to a single lp-norm.
[]
[ { "authors": [ "A. Athalye", "N. Carlini", "D.A. Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": null, "year": 2018 }, { "authors": [ "O. Bastani", "Y. Ioannou", "L. Lampropoulos", "D. Vytiniotis", "A. Nori", "A. Criminisi" ], "title": "Measuring neural net robustness with constraints", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "W. Brendel", "J. Rauber", "M. Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "T.B. Brown", "D. Mané", "A. Roy", "M. Abadi", "J. Gilmer" ], "title": "Adversarial patch", "venue": "NeurIPS", "year": 2017 }, { "authors": [ "N. Carlini", "D. Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In IEEE Symposium on Security and Privacy,", "year": 2017 }, { "authors": [ "N. Carlini", "D. Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "P. Chen", "Y. Sharma", "H. Zhang", "J. Yi", "C. Hsieh" ], "title": "Ead: Elastic-net attacks to deep neural networks via adversarial examples", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "F. Croce", "J. Rauber", "M. Hein" ], "title": "Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks", "venue": null, "year": 1903 }, { "authors": [ "L. Engstrom", "B. Tran", "D. Tsipras", "L. Schmidt", "A. Madry" ], "title": "A rotation and a translation suffice: Fooling CNNs with simple transformations", "venue": "NeurIPS", "year": 2017 }, { "authors": [ "S. Gu", "L. Rigazio" ], "title": "Towards deep neural network architectures robust to adversarial examples", "venue": "In ICLR Workshop,", "year": 2015 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "M. Hein", "M. Andriushchenko" ], "title": "Formal guarantees on the robustness of a classifier against adversarial manipulation", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "R. Huang", "B. Xu", "D. Schuurmans", "C. Szepesvari" ], "title": "Learning with a strong adversary", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "G. Katz", "C. Barrett", "D. Dill", "K. Julian", "M. Kochenderfer" ], "title": "Reluplex: An efficient smt solver for verifying deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "A. Kurakin", "I.J. Goodfellow", "S. Bengio" ], "title": "Adversarial examples in the physical world", "venue": "In ICLR Workshop,", "year": 2017 }, { "authors": [ "A. Madry", "A. Makelov", "L. Schmidt", "D. Tsipras", "A. Valdu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "A. Modas", "S. Moosavi-Dezfooli", "P. Frossard" ], "title": "Sparsefool: a few pixels make a big difference", "venue": null, "year": 2019 }, { "authors": [ "S.-M. Moosavi-Dezfooli", "A. Fawzi", "P. Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "M. Mosbach", "M. Andriushchenko", "T. Trost", "M. Hein", "D. Klakow" ], "title": "Logit pairing methods can fool gradient-based attacks", "venue": "NeurIPS", "year": 2018 }, { "authors": [ "N. 
Narodytska", "S.P. Kasiviswanathan" ], "title": "Simple black-box adversarial perturbations for deep networks", "venue": "In CVPR 2017 Workshops,", "year": 2016 }, { "authors": [ "N. Papernot", "P. McDonald", "X. Wu", "S. Jha", "A. Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep networks", "venue": "In IEEE Symposium on Security & Privacy,", "year": 2016 }, { "authors": [ "N. Papernot", "N. Carlini", "I. Goodfellow", "R. Feinman", "F. Faghri", "A. Matyasko", "K. Hambardzumyan", "Y.-L. Juang", "A. Kurakin", "R. Sheatsley", "A. Garg", "Y.-C. Lin" ], "title": "cleverhans v2.0.0: an adversarial machine learning", "venue": "library. preprint,", "year": 2017 }, { "authors": [ "J. Rauber", "W. Brendel", "M. Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models", "venue": "In ICML Reliable Machine Learning in the Wild Workshop,", "year": 2017 }, { "authors": [ "J. Su", "D.V. Vargas", "S. Kouichi" ], "title": "One pixel attack for fooling deep neural networks", "venue": null, "year": 2019 }, { "authors": [ "V. Tjeng", "K. Xiao", "R. Tedrake" ], "title": "Evaluating robustness of neural networks with mixed integer programming", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "F. Tramèr", "D. Boneh" ], "title": "Adversarial training and robustness for multiple perturbations", "venue": null, "year": 1904 }, { "authors": [ "D. Tsipras", "S. Santurkar", "L. Engstrom", "A. Turner", "A. Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "E. Wong", "F.R. Schmidt", "J.Z. Kolter" ], "title": "Wasserstein adversarial examples via projected sinkhorn iterations", "venue": null, "year": 1902 }, { "authors": [ "S. Zheng", "Y. Song", "T. Leung", "I.J. Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": null, "year": 2016 }, { "authors": [ "T. Zheng", "C. Chen", "K. Ren" ], "title": "Distributionally adversarial attack", "venue": "In AAAI,", "year": 2019 } ]
[ { "heading": "1 Introduction", "text": "The finding of the vulnerability of neural-network-based classifiers to adversarial examples, that is, small perturbations of the input able to modify the decision of the models, started a fast development of a variety of attack algorithms. The high effectiveness of adversarial attacks reveals the fragility of these networks, which questions their safe and reliable use in the real world, especially in safety-critical applications. Many defenses have been proposed to fix this issue (Gu & Rigazio, 2015; Zheng et al., 2016; Papernot et al., 2016; Huang et al., 2016; Bastani et al., 2016; Madry et al., 2018), but with limited success, as new, more powerful attacks showed (Carlini & Wagner, 2017b; Athalye et al., 2018; Mosbach et al., 2018). In order to trust the decision of a model, it is necessary to evaluate the exact adversarial robustness. Although this is possible for ReLU networks (Katz et al., 2017; Tjeng et al., 2019), these techniques do not scale to commonly used large networks. Thus, the robustness is evaluated by approximating the solution of the minimal adversarial perturbation problem through adversarial attacks. One can distinguish attacks as black-box (Narodytska & Kasiviswanathan, 2016; Brendel et al., 2018; Su et al., 2019), where one is only allowed to query the classifier, or white-box, where one has full control over the network; by the attack model used to create adversarial examples (typically some lp-norm, but others have become popular as well, e.g. Brown et al. (2017); Engstrom et al. (2017); Wong et al.); by whether they aim at the minimal adversarial perturbation (Carlini & Wagner, 2017a; Chen et al., 2018; Croce et al., 2019) or rather any perturbation below a threshold (Kurakin et al., 2017; Madry et al., 2018; Zheng et al., 2019); and by whether they have lower (Moosavi-Dezfooli et al., 2016; Modas et al., 2019) or higher (Carlini & Wagner, 2017a; Croce et al., 2019) computational cost. Moreover, it is clear that, due to the non-convexity of the problem, there exists no universally best attack (apart from the exact methods), since this depends on runtime constraints, network architecture, dataset, etc. However, our goal is to have an attack which performs well under a broad spectrum of conditions with a minimal amount of hyperparameter tuning. In this paper we propose a new white-box attacking scheme which performs comparably to or better than established attacks and has the following features: first, it tries to produce adversarial samples with minimal distortion compared to the original point, measured wrt the lp-norms with p ∈ {1, 2,∞}. Compared to the quite popular PGD-attack of Madry et al. (2018), this has the clear advantage that our method does not need to be restarted for every threshold ε if one wants to evaluate the success rate of the attack with perturbations constrained to be in {δ ∈ Rd | ‖δ‖p ≤ ε}. Thus it is particularly suitable to get a complete picture of the robustness of a classifier with low computational cost. Second, it quickly achieves good quality in terms of average distortion or robust accuracy. At the same time, we show that increasing the number of restarts keeps improving the results and makes it competitive with the strongest available attacks. Third, although it comes with a few parameters, these mostly generalize across the datasets, architectures and norms considered, so that we have an almost off-the-shelf method.
Most importantly, unlike PGD and other methods, there is no step size parameter which potentially has to be carefully adapted to every new network." }, { "heading": "2 FAB: a Fast Adaptive Boundary Attack", "text": "We first introduce minimal adversarial perturbations; then we recall the definition and properties of the projection wrt the lp-norms of a point onto the intersection of a hyperplane and box constraints, as they are an essential part of our attack. Finally, we present our FAB-attack algorithm to generate minimally distorted adversarial examples." }, { "heading": "2.1 Minimal adversarial examples", "text": "Let f : Rd → RK be a classifier which assigns every input x ∈ Rd (with d the dimension of the input space) to one of the K classes according to arg max_{r=1,...,K} fr(x). In many scenarios the input of f has to satisfy a specific set of constraints C, e.g. images are represented as elements of [0, 1]d. Then, given a point x ∈ Rd with true class c, we define the minimal adversarial perturbation for x wrt the lp-norm as\nδmin,p = arg min_{δ∈Rd} ‖δ‖p, s.th. max_{l≠c} fl(x + δ) ≥ fc(x + δ), x + δ ∈ C. (1)\nThe optimization problem (1) is non-convex and NP-hard for non-trivial classifiers (Katz et al. (2017)) and, although for some classes of networks it can be formulated as a mixed-integer program (see Tjeng et al. (2019)), the computational cost of solving it is prohibitive for large, normally trained networks. Thus, δmin,p is usually approximated by an attack algorithm, which can be seen as a heuristic to solve (1). We will see in the experiments that current attacks sometimes drastically overestimate ‖δmin,p‖p and thus the robustness of the networks." }, { "heading": "2.2 Projection on a hyperplane with box constraints", "text": "Let w ∈ Rd and b ∈ R be the normal vector and the offset defining the hyperplane π : 〈w, x〉 + b = 0. Given x ∈ Rd, we denote by the box-constrained projection wrt the lp-norm of x on π (the projection onto the intersection of the box C = {z ∈ Rd : li ≤ zi ≤ ui} and the hyperplane π) the following minimization problem:\nz∗ = arg min_{z∈Rd} ‖z − x‖p s.th. 〈w, z〉 + b = 0, li ≤ zi ≤ ui, i = 1, . . . , d, (2)\nwhere li, ui ∈ R are lower and upper bounds on each component of z. For p ≥ 1 the optimization problem (2) is convex. Hein & Andriushchenko (2017) proved that for p ∈ {1, 2,∞} the solution can be obtained in O(d log d) time, that is, the complexity of sorting a vector of d elements, as can the determination that the problem has no solution. Since this projection is part of our iterative scheme, we need to handle specifically the case of (2) being infeasible. In this case, defining ρ = sign(〈w, x〉 + b), we instead compute\nz′ = arg min_{z∈Rd} ρ(〈w, z〉 + b) s.th. li ≤ zi ≤ ui, i = 1, . . . , d, (3)\nwhose solution is given componentwise, for every i = 1, . . . , d, by zi = li if ρwi > 0, zi = ui if ρwi < 0, and zi = xi if wi = 0. Assuming that the point x satisfies the box constraints (as it will in our algorithm), this is equivalent to identifying the corner of the d-dimensional box defined by the componentwise constraints on z closest to the hyperplane π. Notice that if (2) is infeasible then the objective function of (3) stays positive and the points x and z are strictly contained in the same one of the two half-spaces defined by π. Finally, we define the operator\nprojp : (x, π, C) ↦ z∗ if problem (2) is feasible, z′ otherwise, (4)\nyielding the point which gets as close as possible to π without violating the box constraints."
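The paper relies on the exact O(d log d) solution of Hein & Andriushchenko (2017); as a simpler illustration, the following sketch solves the l2 case of (2) approximately by bisection on the Lagrange multiplier, with the corner fallback (3) when (2) is infeasible (helper name and iteration budget are our assumptions):

```python
import numpy as np

def proj2_box(x, w, b, lo, hi, iters=80):
    """l2 projection of x onto {z : <w,z>+b = 0, lo <= z <= hi} (problem (2)),
    falling back to the closest box corner (problem (3)) when infeasible.
    The KKT solution has the form z(lam) = clip(x - lam*w, lo, hi), and
    <w, z(lam)> + b is non-increasing in lam, so bisection on lam suffices."""
    g = lambda lam: np.dot(w, np.clip(x - lam * w, lo, hi)) + b
    lam_lo, lam_hi = -1.0, 1.0
    while g(lam_lo) < 0 and lam_lo > -1e12: lam_lo *= 2   # bracket from the left
    while g(lam_hi) > 0 and lam_hi < 1e12: lam_hi *= 2    # bracket from the right
    if g(lam_lo) < 0 or g(lam_hi) > 0:                    # (2) infeasible: use (3)
        rho = np.sign(np.dot(w, x) + b)
        return np.where(rho * w > 0, lo, np.where(rho * w < 0, hi, x))
    for _ in range(iters):                                # bisection on lam
        lam = 0.5 * (lam_lo + lam_hi)
        if g(lam) > 0: lam_lo = lam
        else: lam_hi = lam
    return np.clip(x - lam * w, lo, hi)
```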
}, { "heading": "2.3 FAB Attack", "text": "We now introduce our algorithm to produce minimally distorted adversarial examples, wrt any lp-norm for p ∈ {1, 2,∞}, for a given point xorig initially correctly classified by f as class c. The high-level idea is that we use the linearization of the classifier at the current iterate x(i), compute the box-constrained projections of x(i) and xorig, respectively, onto the approximated decision hyperplane, and take a convex combination of these projections depending on the distance of x(i) and xorig to the decision hyperplane, followed by some extrapolation step. We explain below the geometric motivation behind these steps. The attack closest in spirit is DeepFool (Moosavi-Dezfooli et al. (2016)), which is known to be very fast but suffers from low quality. DeepFool just tries to find the decision boundary quickly but has no incentive to provide a solution close to xorig. Our scheme resolves this main problem and, together with the exact projection we use, leads to a principled way to track the decision boundary (the surface where the decision of f changes) close to xorig.\nIf f were a linear classifier, then the closest point to x(i) on the decision hyperplane could be found in closed form. Although neural networks are highly non-linear, ReLU networks (neural networks which use ReLU as activation function) are piecewise affine functions, and thus locally a linearization of the network is an exact description of the classifier. Let l ≠ c; then the decision boundary between classes l and c can be locally approximated, using a first-order Taylor expansion at x(i), by the hyperplane\nπl(z) : fl(x(i)) − fc(x(i)) + 〈∇fl(x(i)) − ∇fc(x(i)), z − x(i)〉 = 0. (5)\nMoreover, the lp-distance dp(πl, x(i)) of x(i) to πl is given by\ndp(πl, x(i)) = |fl(x(i)) − fc(x(i))| / ‖∇fl(x(i)) − ∇fc(x(i))‖q, with 1/p + 1/q = 1. (6)\nNote that if dp(πl, x(i)) = 0 then x(i) belongs to the true decision boundary. Moreover, if the local linear approximation of the network is correct, then the class s with the decision hyperplane closest to the point x(i) can be computed as\ns = arg min_{l≠c} |fl(x(i)) − fc(x(i))| / ‖∇fl(x(i)) − ∇fc(x(i))‖q. (7)\nThus, given that the approximation holds in some large enough neighborhood, the projection projp(x(i), πs, C) of x(i) onto πs lies on the decision boundary (unless (2) is infeasible).\nBiased gradient step: The iterative algorithm x(i+1) = projp(x(i), πs, C) would be similar to DeepFool, except that our projection operator is exact whereas they project onto the hyperplane and then clip to [0, 1]d. This scheme is not biased towards the original target point xorig; thus it typically goes further than necessary to find a point on the decision boundary, as basically the algorithm does not aim at the minimal adversarial perturbation. Thus we additionally consider projp(xorig, πs, C) and use instead the iterative step, with x(0) = xorig, defined as\nx(i+1) = (1 − α) · projp(x(i), πs, C) + α · projp(xorig, πs, C), (8)\nwhich biases the step towards xorig (see Figure 1). Note that this is a convex combination of two points on πs and in C, and thus x(i+1) also lies on πs and is contained in C.
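A minimal sketch of the hyperplane selection (6)-(7) (our own illustrative helper; logits and per-class gradients at x(i) are assumed to be precomputed, e.g. via K backward passes):

```python
import numpy as np

def closest_hyperplane(f, grads, c, p):
    """Eqs. (6)-(7): given logits f (shape (K,)) and gradients grads
    (shape (K, d)) of each logit at the current iterate, and true class c,
    return the class s whose local decision hyperplane is closest in
    l_p distance, together with (w, b) describing pi_s at that point."""
    q = {1: np.inf, 2: 2, np.inf: 1}[p]          # dual norm: 1/p + 1/q = 1
    dists, planes = [], []
    for l in range(len(f)):
        if l == c:
            dists.append(np.inf); planes.append(None); continue
        w = grads[l] - grads[c]
        b = f[l] - f[c]                          # offset of pi_l centered at x^(i)
        dists.append(abs(b) / (np.linalg.norm(w, ord=q) + 1e-12))
        planes.append((w, b))
    s = int(np.argmin(dists))
    return s, planes[s]
```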
As we wish a scheme with a minimal number of parameters, we want an automatic selection of α based on the available geometric quantities. Let\nδ(i) = projp(x(i), πs, C) − x(i) and δ(i)orig = projp(xorig, πs, C) − xorig.\nNote that ‖δ(i)‖p and ‖δ(i)orig‖p are the distances of x(i) and xorig to πs (inside C). We propose to use for the parameter α the relative magnitude of these two distances, that is\nα = min{ ‖δ(i)‖p / (‖δ(i)‖p + ‖δ(i)orig‖p), αmax } ∈ [0, 1]. (9)\nThe motivation for doing so is that if x(i) is close to the decision boundary, then we should stay close to this point (note that πs is the approximation of f computed at x(i), and thus it is valid in a small neighborhood of x(i), whereas xorig is farther away). On the other hand, we want to keep the bias towards xorig in order not to go too far away from xorig. This is why α depends on the distances of x(i) and xorig to πs, but we limit it from above by αmax. Finally, we use a small extrapolation step, as we noted empirically, similarly to Moosavi-Dezfooli et al. (2016), that this helps to cross the decision boundary faster and get an adversarial sample. This leads to the final scheme:\nx(i+1) = projC((1 − α)(x(i) + ηδ(i)) + α(xorig + ηδ(i)orig)), (10)\nwhere α is chosen as in (9), η ≥ 1, and projC is just the projection onto the box, which can be done by clipping. In Figure 1 we visualize the scheme: in black one can see the hyperplane πs and the vectors δ(i)orig and δ(i), in blue the step we would make going to the decision boundary with the DeepFool variant, and in red the actual step of our method. The green vector represents the bias towards the original point that we introduce. On the left of Figure 1 we use η = 1, while on the right we use overshooting η > 1.\nInterpretation of projp(xorig, πs, C): The projection of the target point onto the intersection of πs and C is defined as\narg min_{z∈Rd} ‖z − xorig‖p s.th. 〈w, z〉 + b = 0, li ≤ zi ≤ ui.\nNote that replacing z by x(i) + δ we can rewrite this as\narg min_{δ∈Rd} ‖x(i) + δ − xorig‖p s.th. 〈w, x(i) + δ〉 + b = 0, li ≤ x(i)i + δi ≤ ui.\nThis can be interpreted as the minimization of the distance of the next iterate x(i) + δ to the target point xorig, subject to x(i) + δ lying on the intersection of the (approximate) decision hyperplane and the box C. This point of view on the projection projp(xorig, πs, C) again justifies using a convex combination of the two projections in our iterative scheme in (10).\nBackward step: The described scheme finds adversarial perturbations in a few iterations. However, we are interested in minimizing their norms. Thus, once we have a new point x(i+1), we check whether it is assigned by f to a class different from c. In this case, we apply\nx(i+1) = (1 − β)xorig + βx(i+1), β ∈ (0, 1), (11)\nthat is, we go back towards xorig on the segment [x(i+1), xorig], effectively restarting the algorithm at a point which is quite close to the decision boundary. In this way, due to the bias of the method towards xorig, we successively find adversarial perturbations of smaller norm, meaning that the algorithm tracks the decision boundary while getting closer to xorig.
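A minimal sketch of the combined step (9)-(11), assuming the two projections have already been computed (helper names and default values are our illustrative assumptions):

```python
import numpy as np

def biased_step(x, x_orig, proj_x, proj_orig, lo, hi, eta=1.05, alpha_max=0.1, p=2):
    """Eqs. (9)-(10): convex combination of the two projections, biased
    towards x_orig, with extrapolation eta >= 1 and final box clipping.
    proj_x = proj_p(x, pi_s, C), proj_orig = proj_p(x_orig, pi_s, C)."""
    d  = proj_x - x              # delta^(i)
    d0 = proj_orig - x_orig      # delta^(i)_orig
    norm = lambda v: (np.sum(np.abs(v)) if p == 1 else
                      np.max(np.abs(v)) if p == np.inf else np.linalg.norm(v))
    alpha = min(norm(d) / (norm(d) + norm(d0) + 1e-12), alpha_max)
    x_new = (1 - alpha) * (x + eta * d) + alpha * (x_orig + eta * d0)
    return np.clip(x_new, lo, hi)   # proj_C done by clipping

def backward_step(x_adv, x_orig, beta=0.9):
    """Eq. (11): once x_adv is adversarial, restart closer to x_orig."""
    return (1 - beta) * x_orig + beta * x_adv
```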
Final search: Our scheme finds points close to the decision boundary, but often they are slightly off, as the linear approximation is not exact and we apply the extrapolation step with η > 1. Thus, after finishing the Niter iterations of our algorithmic scheme, we perform a last, fast step to further improve the quality of the adversarial examples. Let xout be the closest point to xorig classified differently from c, say as class s ≠ c, found with the iterative scheme. It holds that fs(xout) − fc(xout) > 0 and fs(xorig) − fc(xorig) < 0. This means that, assuming f continuous, there exists a point x∗ on the segment [xout, xorig] such that fs(x∗) − fc(x∗) = 0 and ‖x∗ − xorig‖p < ‖xout − xorig‖p. If f is linear,\nx∗ = xout − [(fs(xout) − fc(xout)) / (fs(xout) − fc(xout) − (fs(xorig) − fc(xorig)))] (xout − xorig). (12)\nSince f is typically non-linear, but close to linear, we compute iteratively for a few steps\nxtemp = xout − [(fs(xout) − fc(xout)) / (fs(xout) − fc(xout) − (fs(xorig) − fc(xorig)))] (xout − xorig), (13)\neach time replacing in (13) xout with xtemp if fs(xtemp) − fc(xtemp) > 0, or xorig with xtemp if instead fs(xtemp) − fc(xtemp) < 0. With this kind of modified binary search one can find a better adversarial sample at the cost of a few forward passes of the network.\nRandom restarts: So far all the steps are deterministic. To improve the results, we introduce the option of random restarts, that is, x(0) is randomly sampled in the proximity of xorig instead of being xorig itself. Most attacks benefit from random restarts, e.g. Madry et al. (2018); Zheng et al. (2019), especially when dealing with gradient-masking defenses (Mosbach et al. (2018)), as they allow a wider exploration of the input space. We choose to sample from the lp-sphere centered at the original point with radius half the lp-norm of the current best adversarial perturbation (or a given threshold if no adversarial example has been found yet).\nComputational cost: Our attack, summarized in Algorithm 1 (Input: xorig original point, c original class, Nrestarts, Niter, αmax, β, η, ε, p; Output: xout adversarial example), consists of two main operations: the computation of f and its gradients, and solving the projection (2). We perform, for each iteration, a forward and a backward pass of the network in the gradient step and a forward pass in the backward step. The projection can be efficiently implemented to run in batches on the GPU, and its complexity depends only on the input dimension. Thus, except for shallow models, its cost is much smaller than that of the passes through the network. We can approximate the computational cost of our algorithm by the total number of calls of the classifier:\nNiter × Nrestarts × (2 × forward passes + 1 × backward pass). (14)\nOne has to add the forward passes for the final search, fixed to 3, which happens just once." }, { "heading": "2.4 Comparison to DeepFool", "text": "The idea of exploiting the first-order local approximation of the decision boundary is not novel but is the basis of one of the first white-box adversarial attacks, DeepFool (DF) from Moosavi-Dezfooli et al. (2016). While DF and our FAB-attack share the strategy of using a linear approximation of the classifier and projecting onto the decision hyperplanes, we want to point out several key differences: first, DF does not solve the projection (2) but its simpler version without box constraints, clipping afterwards. Second, their gradient step does not have any bias towards the original point, which is equivalent to α = 0 in (10). Third, DF does not have any backward step, final search or restart, as it stops as soon as a misclassified point is found (its goal is to quickly provide an adversarial perturbation of average quality). We perform an ablation study of these differences in Figure 2, where we show the curves of the robust accuracy as a function of the threshold ε (lower is better). We present the results of DeepFool (blue) and of FAB-attack with the following variations: αmax = 0.1 and no backward step (magenta), αmax = 0 (that is, no bias in the gradient step) and no restarts (light green), αmax = 0.1 and no restarts (orange), αmax = 0 and 100 restarts (dark green), and αmax = 0.1 and 100 restarts, that is FAB-attack (red). We can see how every addition we make to the original scheme of DeepFool contributes to the significantly improved performance of FAB-attack.
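To make the final search concrete, a minimal sketch of the iteration (13) (our own helper; g is assumed to be a callable returning fs(x) − fc(x), i.e., one forward pass per call):

```python
def final_search(x_out, x_orig, g, steps=3):
    """Modified binary search of Eq. (13). Invariant: g(x_out) > 0
    (adversarial) and g(x_orig) < 0, where g(x) = f_s(x) - f_c(x)."""
    g_out, g_orig = g(x_out), g(x_orig)
    for _ in range(steps):
        # linear interpolation of the zero crossing of g on the segment
        x_tmp = x_out - g_out / (g_out - g_orig) * (x_out - x_orig)
        g_tmp = g(x_tmp)                      # one forward pass
        if g_tmp > 0:
            x_out, g_out = x_tmp, g_tmp       # still adversarial: tighten x_out
        else:
            x_orig, g_orig = x_tmp, g_tmp     # not adversarial: tighten x_orig
    return x_out
```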
" }, { "heading": "3 Experiments", "text": "Models: We run experiments on MNIST, CIFAR-10 (Krizhevsky et al.) and Restricted ImageNet (Tsipras et al. (2019)). For each dataset we consider a naturally trained model (plain) and two adversarially trained ones as in Madry et al. (2018), one trained to achieve robustness wrt the l∞-norm (l∞-AT) and the other wrt the l2-norm (l2-AT) (see A.1).\nAttacks: We compare the performance of FAB-attack to that of attacks representing the state-of-the-art in each norm: DeepFool (DF) (Moosavi-Dezfooli et al. (2016)), the Carlini-Wagner l2-attack (CW) (Carlini & Wagner (2017a)), Linear Region l2-Attack (LRA) (Croce et al. (2019)), Projected Gradient Descent on the cross-entropy function (PGD) (Kurakin et al., 2017; Madry et al., 2018; Tramèr & Boneh, 2019), Distributionally Adversarial Attack (DAA) (Zheng et al. (2019)), SparseFool (SF) (Modas et al. (2019)), and the Elastic-net Attack (EAD) (Chen et al. (2018)). We use DF from Rauber et al. (2017), CW and EAD as in Papernot et al. (2017), and DAA and LRA with the code from the original papers, while we reimplemented SF and PGD. For MNIST and CIFAR-10 we used DAA with 50 restarts, and PGD and FAB with 100 restarts. For Restricted ImageNet, we used DAA, PGD and FAB with 10 restarts (for l1 we used 5 restarts, since both methods benefit from more iterations). Moreover, we could not use LRA, since it hardly scales to such models, nor CW and EAD, due to compatibility issues between the implementations of the attacks and the models. See A.2 for more details, e.g. regarding the number of iterations and other hyperparameters.\nEvaluation metrics: The robust accuracy of a model at a threshold ε is defined as the classification accuracy (in percentage) the model achieves when an attack is allowed to change every input of the test set with perturbations of lp-norm smaller than ε in order to change the decision. Thus stronger attacks produce lower robust accuracies. For each model and dataset we fix five thresholds ε at which we compute the robust accuracy of each attack (we choose the thresholds so that the values of the robust accuracy cover the range between the clean accuracy and 0). We evaluate the attacks through the following statistics: i) avg. rob. accuracy: the mean of all the values of robust accuracy given by the attack over all models and thresholds, ii) # best: how many times the attack achieves the lowest robust accuracy (i.e., is the most effective), iii) avg. difference to best: for each model/threshold we compute the difference between the robust accuracy of the attack and the best one across all the attacks, then average over all models/thresholds, iv) max difference to best: as \"avg. difference to best\", but with the maximum difference instead of the average one. In A.4 we report the average lp-norm of the adversarial perturbations given by the attacks.\nResults: We report the complete results in Tables 5 to 13 of the Appendix and summarize them in Tables 1 (MNIST and CIFAR-10 aggregated, as we used the same attacks) and 2 (Restricted ImageNet).
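Since FAB minimizes the perturbation size, the robust accuracy defined above can be evaluated at every threshold ε from a single run; a minimal sketch (our own helper, with made-up numbers):

```python
import numpy as np

def robust_accuracy(pert_norms, clean_correct, eps):
    """Robust accuracy at threshold eps from one minimal-perturbation run.
    pert_norms[i] = l_p-norm of the smallest adversarial perturbation found
    for input i (np.inf if the attack failed); clean_correct[i] in {0,1}."""
    robust = clean_correct & (pert_norms > eps)   # still correct under attack
    return 100.0 * robust.mean()

# Example: one attack run yields the whole accuracy-vs-eps curve.
norms = np.array([0.5, 1.2, np.inf, 0.05])        # hypothetical values
clean = np.array([1, 1, 1, 0], dtype=bool)
curve = [robust_accuracy(norms, clean, e) for e in (0.1, 0.5, 1.0)]
```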
Our FAB-attack achieves the best results in all statistics for every norm (with the only exception of \"max diff. to best\" in l∞) on MNIST+CIFAR-10, meaning that it is the most effective attack. In particular, while on l∞ the \"avg. robust accuracy\" of PGD is not far from that of FAB, the gap is large when considering l2 and l1. Interestingly, the second best attack, at least in terms of average robust accuracy, is different for every norm (PGD for l∞, LRA for l2, EAD for l1), which implies that FAB outperforms algorithms specialized in the individual norms. We also report the results of FAB-10, that is, our attack with only 10 restarts, to show that FAB yields high quality results already with a low budget in terms of time/computational cost. In fact, FAB-10 has an \"avg. robust accuracy\" better than or very close to that of the strongest versions of the other methods (see below for a runtime analysis, where one observes that FAB-10 is the fastest attack excluding DF and SF, which however give much worse results). On Restricted ImageNet, FAB-attack gets the best results in all statistics for l1, while for l∞ and l2, although PGD often performs better, the difference in \"avg. robust accuracy\" is small, meaning that FAB performs mostly similarly to PGD.\nIn general, both the average and maximum difference to best of FAB-attack are small for all the datasets and norms, implying that it does not suffer severe failures, which makes it an efficient, high quality technique to evaluate the robustness of classifiers for all lp-norms. Finally, we show in Table 4 that FAB-attack outperforms or matches the competitors in 16 out of 18 cases when comparing the average lp-norms of the generated adversarial perturbations.\nRuntime comparison: DF and SF are definitely much faster than the others, as their primary goal is to find adversarial examples as quickly as possible, without emphasis on minimizing their norms, while LRA is rather expensive, as noted in the original paper. Below we report the runtimes (for 1000 points on MNIST and CIFAR-10, 50 on R-ImageNet) for the attacks as used in the experiments (if not specified otherwise, this includes all the restarts). For PGD and DAA this is the time for evaluating the robust accuracy at 5 thresholds, while for the other methods a single run is sufficient to compute all the statistics. MNIST: DAA-50 11736s, PGD-100 3825s for l∞/l2 and PGD-100 14106s for l1, CW 944s, EAD 606s, FAB-10 161s, FAB-100 1613s. CIFAR-10: DAA-50 11625s, PGD-100 31900s for l∞/l2 and 70110s for l1, CW 3691s, EAD 3398s, FAB-10 1209s, FAB-100 12093s. R-ImageNet: DAA-10 6890s, PGD-10 4738s for l∞/l2 and PGD-5 24158s for l1, FAB-10 2268s for l∞/l2 and FAB-5 3146s for l1 (note that different numbers of restarts/iterations are used for l1 on R-ImageNet). PGD needs a forward and a backward pass of the network for each iteration; thus it is given 1.5 times more iterations than FAB, so that overall they have the same budget of passes (we assume here that forward and backward passes take the same amount of time). In Appendix B.2 we compare PGD-1 versus FAB-1 as a function of the number of passes (2 passes are one iteration of PGD, 3 passes are one iteration of FAB; the plots show 300 passes, meaning 150 iterations of PGD and 100 iterations of FAB), so that the comparison is fair concerning runtime. If one considers just the performance up to 20 passes (10 iterations of PGD, 7 iterations of FAB), then FAB outperforms PGD in 18 out of 27 cases. However, one also observes that there is no general superiority of one method.
For both PGD-1 and FAB-1 there are cases where the method requires the full amount of 300 passes to reach good performance, whereas the other method achieves it with significantly fewer iterations/passes." }, { "heading": "4 Conclusion", "text": "In summary, our geometrically motivated FAB-attack outperforms, in terms of runtime and on average in terms of quality, all other high quality state-of-the-art attacks, and it can be used for all lp-norms with p ∈ {1, 2,∞}, which is not the case for most other methods." }, { "heading": "A Experiments", "text": "" }, { "heading": "A.1 Models", "text": "The plain and l∞-AT models on MNIST are those available at https://github.com/MadryLab/mnist_challenge and consist of two convolutional and two fully-connected layers. The architecture of the CIFAR-10 models has 8 convolutional layers (with the number of filters increasing from 96 to 384) and 2 dense layers, while on Restricted ImageNet we use the models (ResNet-50, He et al. (2016)) from Tsipras et al. (2019), available at https://github.com/MadryLab/robust-features-code. The models on MNIST achieve the following clean accuracy: plain 98.7%, l∞-AT 98.5%, l2-AT 98.6%. The models on CIFAR-10 achieve the following clean accuracy: plain 89.2%, l∞-AT 79.4%, l2-AT 81.2%." }, { "heading": "A.2 Attacks", "text": "Values of ε used for random restarts:\nMNIST | CIFAR-10 | Restricted ImageNet\nplain l∞-AT l2-AT | plain l∞-AT l2-AT | plain l∞-AT l2-AT\nl∞ 0.15 0.3 0.3 | 0.0 0.02 0.02 | 0.02 0.08 0.08\nl2 2.0 2.0 2.0 | 0.5 4.0 4.0 | 5.0 5.0 5.0\nl1 40.0 40.0 40.0 | 10.0 10.0 10.0 | 100.0 250.0 250.0" }, { "heading": "A.3 Complete results", "text": "In Tables 5 to 13 we report the complete values of the robust accuracy, wrt either l∞, l2 or l1, computed by every attack, for 3 datasets, 3 models for each dataset, and 5 thresholds for each model (135 evaluations overall)." }, { "heading": "A.4 Further results", "text": "In Table 4 we report the average lp-norm of the adversarial perturbations found by the different attacks, computed on the originally correctly classified points on which the attack is successful. Note that we cannot show this statistic for the attacks which do not minimize the distance of the adversarial example to the clean input (PGD and DAA). FAB-attack produces also in this metric the best results in most of the cases, being the best for every model when considering l∞ and l2, and the best in 4 out of 6 cases in l1 (lower values mean a stronger attack)." }, { "heading": "MNIST", "text": "" }, { "heading": "B Analysis of the attacks", "text": "" }, { "heading": "B.1 Choice of the step size of PGD", "text": "Here we show the performance of PGD wrt l2 on MNIST and CIFAR-10 under different choices of the step size. In particular, we focus on the largest and the middle values of ε chosen in the evaluation, where the different step-size choices have the largest impact. We report the robust accuracy for ε at each of the 150 iterations. We test step sizes ε/t for t ∈ {1, 2, 4, 10, 25, 75}. For each step size we run the attack 10 times with random initialization and show the run which achieves the lowest robust accuracy after 150 iterations. Note, however, that the behaviour of different runs varies minimally. In Figure 3 we show the results for the three models for MNIST and CIFAR-10 for two different choices of ε used in Section 3, with the blue becoming darker as the step size decreases, while our chosen step size, that is ε/4, is highlighted in red. We see that it achieves, for all models, the best or close to the best robust accuracy and is clearly the best on average.
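For reference, a minimal sketch of the l2-PGD baseline with fixed step size ε/4 (a simplified illustration, not the exact re-implementation used in the experiments; it omits the 10 random initializations and assumes 4D image batches):

```python
import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps, n_iter=150, step=None):
    """Simplified l2-PGD ascent on the cross-entropy loss with fixed
    step size (default eps/4, the choice highlighted in Figure 3)."""
    step = eps / 4.0 if step is None else step
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_iter):
        loss = F.cross_entropy(model(x + delta), y)
        g, = torch.autograd.grad(loss, delta)
        g = g / (g.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)  # unit l2 step
        d = delta + step * g
        nrm = d.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
        d = d * torch.clamp(eps / (nrm + 1e-12), max=1.0)   # project to l2 ball
        d = (x + d).clamp(0.0, 1.0) - x                     # respect the [0,1] box
        delta = d.detach().requires_grad_()
    return (x + delta).detach()
```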
" }, { "heading": "B.2 Evolution across iteration", "text": "Here we compare the evolution of the robust accuracy across the iterations of a single run of PGD and FAB, that is, PGD-1 and FAB-1 from Tables 5 to 13. Since PGD performs 1 forward and 1 backward pass for each iteration while FAB performs 2 forward passes and 1 backward pass, we rescale the plots so as to compare the two methods when they have exploited the same number of passes of the network. Then 300 passes correspond to 150 iterations of PGD and to 100 iterations of FAB. In Figures 4, 5 and 6 we show the evolution of the robust accuracy for the different datasets, models and threat models (l∞, l2 and l1), computed at the median threshold among the five used in Tables 5 to 13." }, { "heading": "MNIST, l1", "text": "" }, { "heading": "MNIST, l2", "text": "" }, { "heading": "MNIST, l∞", "text": "CIFAR-10, l∞\nCIFAR-10, l2\nCIFAR-10, l1\nRestricted ImageNet, l∞\nRestricted ImageNet, l2\nRestricted ImageNet, l1" } ]
2019
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
SP:51a88b77450225e0f80f9fa25510fb4ea64463b2
[ "The authors present a model for time series which are represented as discrete events in continuous time and describe methods for performing parameter inference, future event prediction, and entropy rate estimation for such processes. Their model builds on models for Bayesian structural inference, adding the temporal dimension in a rigorous way while solving several technical challenges along the way. The writing style is lucid and the illustrations are helpful and of high quality.", "The paper focuses on the problem of modeling, predicting, and estimating entropy information over continuous-time, discrete-event processes. Specifically, the paper leverages unifilar HSMMs for model inference and then uses the inferred states to make future predictions. The authors also use the inferred model with previously developed techniques for estimating the entropy rate. The authors describe the methods and provide evidence of the effectiveness of their method with experiments on a synthetic dataset." ]
The inference of models, prediction of future symbols, and entropy rate estimation of discrete-time, discrete-event processes is well-worn ground. However, many time series are better conceptualized as continuous-time, discrete-event processes. Here, we provide new methods for inferring models, predicting future symbols, and estimating the entropy rate of continuous-time, discrete-event processes. The methods rely on an extension of Bayesian structural inference that takes advantage of neural networks' universal approximation power. Based on experiments with simple synthetic data, these new methods seem to be competitive with state-of-the-art methods for prediction and entropy rate estimation as long as the correct model is inferred.
[]
[ { "authors": [ "Evan Archer", "Il Memming Park", "Jonathan W Pillow" ], "title": "Bayesian entropy estimation for countable discrete distributions", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Dieter Arnold", "H-A Loeliger. On the information rate of binary-input channels with memory. In ICC" ], "title": "IEEE International Conference on Communications", "venue": "Conference Record (Cat. No. 01Ch37240), volume 9, pp. 2692–2695. IEEE, 2001.", "year": 2001 }, { "authors": [ "Gordon J Berman", "William Bialek", "Joshua W Shaevitz" ], "title": "Predictability and hierarchy in drosophila behavior", "venue": "Proceedings of the National Academy of Sciences,", "year": 2016 }, { "authors": [ "W. Bialek", "I. Nemenman", "N. Tishby" ], "title": "Complexity through nonextensivity", "venue": "Physica A,", "year": 2001 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "Andrea Cavagna", "Irene Giardina", "Francesco Ginelli", "Thierry Mora", "Duccio Piovani", "Raffaele Tavarone", "Aleksandra M Walczak" ], "title": "Dynamical maximum entropy approach to flocking", "venue": "Physical Review E,", "year": 2014 }, { "authors": [ "Sebastian Egner", "Vladimir B Balakirsky", "Ludo Tolhuizen", "Stan Baggen", "Henk Hollmann" ], "title": "On the entropy rate of a hidden markov model", "venue": "In International Symposium onInformation Theory,", "year": 2004 }, { "authors": [ "Keinosuke Fukunaga", "L Hostetler" ], "title": "Optimization of k nearest neighbor density estimates", "venue": "IEEE Transactions on Information Theory,", "year": 1973 }, { "authors": [ "Robert J Geller" ], "title": "Earthquake prediction: a critical review", "venue": "Geophysical Journal International,", "year": 1997 }, { "authors": [ "Lyudmila Grigoryeva", "Juan-Pablo Ortega" ], "title": "Echo state networks are universal", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "Kurt Hornik" ], "title": "Approximation capabilities of multilayer feedforward networks", "venue": "Neural networks,", "year": 1991 }, { "authors": [ "R.G. James", "C.J. Ellison", "J.P. Crutchfield" ], "title": "Anatomy of a bit: Information in a time series observation. CHAOS", "venue": "URL http://arxiv. 
org/abs/1105.2988", "year": 2011 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alexander Kraskov", "Harald Stögbauer", "Peter Grassberger" ], "title": "Estimating mutual information", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "Michael L Littman", "Richard S Sutton" ], "title": "Predictive representations of state", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Henrik Madsen" ], "title": "Time series analysis", "venue": "Chapman and Hall/CRC,", "year": 2007 }, { "authors": [ "Malik Magdon-Ismail", "Amir F Atiya" ], "title": "Neural networks for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 1999 }, { "authors": [ "JS Marron" ], "title": "A comparison of cross-validation techniques in density estimation", "venue": "The Annals of Statistics,", "year": 1987 }, { "authors": [ "Sarah E Marzen", "James P Crutchfield" ], "title": "Structure and randomness of continuous-time, discreteevent processes", "venue": "Journal of Statistical Physics,", "year": 2017 }, { "authors": [ "Ilya Nemenman", "Fariel Shafee", "William Bialek" ], "title": "Entropy and inference, revisited", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "D. Pfau", "N. Bartlett", "F. Wood" ], "title": "Probabilistic deterministic infinite automata", "venue": "Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "F. Rieke", "D. Warland", "R. de Ruyter van Steveninck", "W. Bialek" ], "title": "Spikes: Exploring the Neural Code", "venue": null, "year": 1999 }, { "authors": [ "C.C. Strelioff", "J.P. Crutchfield" ], "title": "Bayesian structural inference for hidden processes", "venue": "Phys. Rev. E,", "year": 2014 }, { "authors": [ "Jonathan D Victor" ], "title": "Binless strategies for estimation of information from neural data", "venue": "Physical Review E,", "year": 2002 }, { "authors": [ "Paul J Werbos" ], "title": "Backpropagation through time: what it does and how to do it", "venue": "Proceedings of the IEEE,", "year": 1990 } ]
[ { "heading": null, "text": "The inference of models, prediction of future symbols, and entropy rate estimation of discrete-time, discrete-event processes is well-worn ground. However, many time series are better conceptualized as continuous-time, discrete-event processes. Here, we provide new methods for inferring models, predicting future symbols, and estimating the entropy rate of continuous-time, discrete-event processes. The methods rely on an extension of Bayesian structural inference that takes advantage of neural networks' universal approximation power. Based on experiments with simple synthetic data, these new methods seem to be competitive with state-of-the-art methods for prediction and entropy rate estimation as long as the correct model is inferred." }, { "heading": "1 INTRODUCTION", "text": "Much scientific data is dynamic, meaning that we see not a static image of a system but its time evolution. The additional richness of dynamic data should allow us to better understand the system, but we may not know how to process the richer data in a way that will yield new insight into the system in question. For example, we have records of when earthquakes have occurred, but still lack the ability to predict earthquakes well or estimate their intrinsic randomness (Geller, 1997); we know which neurons have spiked when, but lack an understanding of the neural code (Rieke et al., 1999); and finally, we can observe organisms, but have difficulty modeling their behavior (Berman et al., 2016; Cavagna et al., 2014). Such examples are not only continuous-time, but also discrete-event, meaning that the observations belong to a finite set (e.g., a neuron spikes or is silent) and are not better described as a collection of real numbers. These disparate scientific problems are begging for a unified framework for inferring expressive continuous-time, discrete-event models and for using those models to make predictions and, potentially, estimate the intrinsic randomness of the system.\nIn this paper, we present a step towards such a unified framework that takes advantage of two things: the inferential and predictive advantages of unifilarity, meaning that the hidden Markov model's underlying state (the so-called \u201ccausal state\u201d (Shalizi & Crutchfield, 2001) or \u201cpredictive state representation\u201d (Littman & Sutton, 2002)) can be uniquely identified from the past data; and the universal approximation power of neural networks (Hornik, 1991). Indeed, one could view the proposed algorithm for model inference as the continuous-time extension of the Bayesian structural inference of Strelioff & Crutchfield (2014). We focus on time series that are discrete-event and inherently stochastic.\nIn particular, we infer the most likely unifilar hidden semi-Markov model (uhsMm) given data using the Bayesian information criterion. This class of models is slightly more powerful than semi-Markov models, in which the future symbol depends only on the prior symbol, but for which the dwell time of the next symbol is drawn from a non-exponential distribution. With unifilar hidden semi-Markov models, the probability of a future symbol depends on arbitrarily long pasts of prior symbols, and the dwell time distribution for that symbol is non-exponential. Beyond just model inference, we can use the inferred model and the closed-form expressions in Ref. (Marzen & Crutchfield, 2017) to estimate the process' entropy rate, and we can use the inferred states of the uhsMm to predict future input via a k-nearest neighbors approach.
We compare the latter two algorithms to reasonable extensions of state-of-the-art algorithms. Our new algorithms appear competitive as long as model inference is in-class, meaning that the true model producing the data is equivalent to one of the models in our search.\nIn Sec. 3, we introduce the reader to unifilar hidden semi-Markov models. In Sec. 4, we describe our new algorithms for model inference, entropy rate estimation, and time series prediction, and test our algorithms on synthetic data that is memoryful. And in Sec. 5, we discuss potential extensions and applications of this research." }, { "heading": "2 RELATED WORK", "text": "There exist many methods for studying discrete-time processes. A classical technique is the autoregressive process, AR-k, in which the predicted symbol is a linear combination of previous symbols; a slight modification of this is the generalized linear model (GLM), in which the probability of a symbol is proportional to the exponential of a linear combination of previous symbols (Madsen, 2007). Previous workers have also used the Baum-Welch algorithm (Rabiner & Juang, 1986), Bayesian structural inference (Strelioff & Crutchfield, 2014), or a nonparametric extension of Bayesian structural inference (Pfau et al., 2010) to infer a hidden Markov model or a probability distribution over hidden Markov models of the observed process; if the most likely state of the hidden Markov model is correctly inferred, one can use the model's structure to predict the future symbol. More recently, recurrent neural networks and reservoir computers can be trained to recreate the output of any dynamical system, through simple linear or logistic regression for reservoir computers (Grigoryeva & Ortega, 2018) or backpropagation through time for recurrent neural networks (Werbos et al., 1990).\nWhen it comes to continuous-time, discrete-event predictors, far less has been done. Most continuous-time data is, in fact, discrete-time data with a high time resolution; as such, one can essentially sample continuous-time, discrete-event data at high resolution and use any of the previously mentioned methods for predicting discrete-time data. Alternatively, one can represent continuous-time, discrete-event data as a list of dwell times and symbols and feed that data into either a recurrent neural network or a feedforward neural network. We take a new approach: we infer continuous-time hidden Markov models (Marzen & Crutchfield, 2017) and predict using the model's internal state as useful predictive features." }, { "heading": "3 BACKGROUND", "text": "We are given a sequence of symbols and durations of those symbols, . . . , (xi, τi), . . . , (x0, τ+0). This constitutes the data, D. For example, seismic time series are of this kind: magnitude and time between earthquakes. The last seen symbol x0 has been seen for a duration τ+0. Had we observed the system for a longer amount of time, τ+0 might increase. The possible symbols {xi}i are assumed to belong to a finite set A, while the interevent intervals {τi}i are assumed to belong to (0,∞). We assume stationarity, i.e., that the statistics of {(xi, τi)}i are unchanging in time.\nThe above describes the observed time series. What now follows is a shortened description of the unifilar hidden semi-Markov models, notated M, that could be generating such a time series (Marzen & Crutchfield, 2017). The minimal such model that is consistent with the observations is the ε-Machine.
Underlying a unifilar hidden semi-Markov model is a finite-state machine with states g, each equipped with a dwell-time distribution φg(τ), an emission probability p(x|g), and a function ε+(g, x) that specifies the next hidden state when given the current hidden state g and the current emission symbol x. This model generates a time series as follows: a hidden state g is randomly chosen; a dwell time τ is chosen according to the dwell-time distribution φg(τ); an emission symbol is chosen according to the conditional probability p(x|g); and we then observe the chosen x for τ amount of time. A new hidden state is determined via ε+(g, x), and we further restrict possible next emissions to be different from the previous emission, a property that makes this model unifilar, and the process repeats. See Fig. 1 for illustrations of a unifilar hidden semi-Markov model." }, { "heading": "4 ALGORITHMS", "text": "We investigate three tasks: model inference; calculation of the differential entropy rate; and development of a predictor of future symbols. Our main claim is that restricting attention to a special type of discrete-event, continuous-time model, the unifilar hidden semi-Markov model, makes all three tasks easier." }, { "heading": "4.1 INFERENCE OF UNIFILAR HIDDEN SEMI-MARKOV PROCESSES", "text": "The unifilar hidden semi-Markov models described earlier can be parameterized. Let M refer to a model, in this case the underlying topology of the finite-state machine and the neural networks defining the densities of dwell times; let θ refer to the model's parameters, i.e., the emission probabilities and the parameters of the neural networks; and let D refer to the data, i.e., the list of emitted symbols and dwell times. Ideally, to choose a model, we would do maximum a posteriori by calculating argmaxM Pr(M|D) and choose the parameters of that model via maximum likelihood, argmaxθ Pr(D|θ,M). In the case of discrete-time unifilar hidden Markov models, Strelioff and Crutchfield (Strelioff & Crutchfield, 2014) described the Bayesian framework for inferring the best-fit model and parameters. More than that, Ref. (Strelioff & Crutchfield, 2014) calculated the posterior analytically, using the unifilarity property to ease the mathematical burden. Analytic calculations in continuous time may be possible, but we leave that for a future endeavor. We instead turn to a variety of approximations, still aided by the unifilarity of the inferred models.\nThe main such approximation is our use of the Bayesian information criterion (BIC) Bishop (2006). Maximum a posteriori is performed via\nBIC = maxθ log Pr(D|θ,M) − (kM/2) log |D|, (1)\nM∗ = argmaxM BIC, (2)\nwhere kM is the number of parameters θ. To choose a model, then, we must calculate not only the parameters θ that maximize the log likelihood, but the log likelihood itself. We make one further approximation for tractability involving the start state s0, for which\nPr(D|θ,M) = ∑s0 π(s0|θ,M) Pr(D|s0, θ,M). (3)\nAs the logarithm of a sum has no easy expression, we approximate\nmaxθ log Pr(D|θ,M) = maxs0 maxθ log Pr(D|s0, θ,M). (4)\nOur strategy, then, is to choose the parameters θ that maximize maxs0 log Pr(D|s0, θ,M) and to choose the model M that maximizes maxθ log Pr(D|θ,M) − (kM/2) log |D|. This constitutes an inference of a model that can explain the observed data.\nWhat remains to be done, therefore, is approximation of maxs0 maxθ log Pr(D|s0, θ,M).
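Concretely, the selection rule (1)-(2) is a penalized comparison of fitted log likelihoods; a minimal sketch (the numbers are hypothetical placeholders, not results from the paper):

```python
import numpy as np

def bic_score(log_lik, n_params, n_obs):
    """Eq. (1): maximized log likelihood minus the complexity penalty."""
    return log_lik - 0.5 * n_params * np.log(n_obs)

# Model selection as in Eq. (2): pick the candidate uhsMm topology that
# maximizes BIC; the fitted log likelihoods and parameter counts below
# are made-up illustrative values.
log_liks = {"2-state": -1405.2, "4-state": -1398.7, "6-state": -1398.5}
n_params = {"2-state": 14, "4-state": 28, "6-state": 42}
best = max(log_liks, key=lambda m: bic_score(log_liks[m], n_params[m], n_obs=1000))
```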
The parameters θ of any given model include p(s′, x|s), the probability of emitting x when in state s and transitioning to state s′, and φs(t), the interevent interval distribution of state s. Using the unifilarity of the underlying model, the sequence of x's, when combined with the start state s0, translates into a single possible sequence of hidden states si. As such, one can show that\nlog Pr(D|s0, θ,M) = ∑s ∑j log φs(τj(s)) + ∑s,x,s′ n(s′, x|s) log p(s′, x|s), (5)\nwhere τj(s) is any interevent interval produced when in state s. It is relatively easy to analytically maximize with respect to p(s′, x|s), including the constraint that ∑s′,x p(s′, x|s) = 1 for any s, and find that\np∗(s′, x|s) = n(s′, x|s)/n(s). (6)\nNow we turn to approximation of the dwell-time distributions, φs(t). The dwell-time distribution can, in theory, be any normalized nonnegative function; inference may seem impossible. However, artificial neural networks can, with enough nodes, represent any continuous function. We therefore represent φs(t) by a relatively shallow (here, three-layer) artificial neural network (ANN) in which nonnegativity and normalization are enforced as follows:\n• the second-to-last layer's activation functions are ReLUs (max(0, x), and so with nonnegative output) and the weights to the last layer are constrained to be nonnegative;\n• and the output is the last layer's output divided by a numerical integration of the last layer's output.\nThe log likelihood ∑j log φs(τj(s)) determines the cost function for the neural network. Then, the neural network can be trained using typical stochastic optimization methods. (Here, we use Adam (Kingma & Ba, 2014).) The output of the neural network can successfully estimate the interevent interval density function, given enough samples, within the interval for which there is data. See Fig. 2. Outside this interval, however, the estimated density function is not guaranteed to vanish as t → ∞, and can even grow. Stated differently, the neural networks considered here are good interpolators, but can be bad extrapolators. As such, the density function estimated by the network is taken to be 0 outside the interval for which there is data.\nTo the best of our knowledge, this is a new approach to density estimation, referred to as ANN here. A previous approach to density estimation using neural networks learned the cumulative distribution function (Magdon-Ismail & Atiya, 1999). Typical approaches to density estimation include k-nearest neighbor estimation techniques and Parzen window estimates, both of which need careful tuning of hyperparameters (k or h) (Bishop, 2006). They are referred to here as kNN and Parzen. We compare the ANN, kNN, and Parzen approaches in inferring an interevent interval density function that we have chosen, arbitrarily, to be the mixture of inverse Gaussians shown in Fig. 2(left). The k in k-nearest neighbor estimation is chosen according to the criterion in Ref. Fukunaga & Hostetler (1973), and h is chosen so as to maximize the pseudolikelihood (Marron et al., 1987). Note that, as shown in Fig. 2(right), this is not a superior approach to density estimation in terms of minimization of mean-squared error, but it is parametric, so that BIC can be used.
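A minimal sketch of such a normalized network density in PyTorch (our own illustration; the layer sizes, the tanh first layer, and the grid-based normalizer are assumptions consistent with the description above):

```python
import torch
import torch.nn as nn

class DwellDensity(nn.Module):
    """Normalized-ANN dwell-time density phi_s(t): ReLU penultimate layer,
    nonnegative final weights, and numeric normalization over a grid
    covering the observed dwell times."""
    def __init__(self, t_max, hidden=32, grid=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.w_out = nn.Parameter(torch.rand(hidden, 1))   # clamped >= 0 below
        self.register_buffer("grid", torch.linspace(1e-4, t_max, grid).unsqueeze(1))

    def unnormalized(self, t):
        return self.net(t) @ self.w_out.clamp(min=0.0)     # nonnegative output

    def forward(self, t):                                  # density phi_s(t)
        z = torch.trapz(self.unnormalized(self.grid).squeeze(),
                        self.grid.squeeze())               # numeric integral
        return self.unnormalized(t) / (z + 1e-12)

# Training: minimize the negative log likelihood -sum_j log phi_s(tau_j)
# with Adam, as described in the text.
```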
As BIC is a log likelihood minus a penalty for a larger number of parameters, a larger BIC suggests a higher posterior. With very little data, a two-state model shown in Fig. 3 is deemed most likely; but as the amount of data increases, the correct four-state model eventually takes precedence. See Fig. 3(right). The six-state model was never deemed more likely than a two-state or four-state model. Note that although this methodology might be extended to nonunifilar hidden semi-Markov models, the unifilarity allowed for easily computable and unique identification of dwell times to states in Eq. 5." }, { "heading": "4.2 IMPROVED CALCULATION OF DIFFERENTIAL ENTROPY RATES", "text": "One benefit of unifilar hidden semi-Markov models is that one can use them to obtain explicit formulae for the differential entropy rate (Marzen & Crutchfield, 2017). Such entropy rates are a measure of the inherent randomness of a process (Crutchfield & Feldman, 2003), and many have tried to find better algorithms for calculating entropy rates of complex processes (Egner et al., 2004; Arnold & Loeliger, 2001; Nemenman et al., 2002; Archer et al., 2014). Setting aside for now the issue of why one would want to estimate the entropy rate, we simply ask how well one can estimate the entropy rate from finite data.

Differential entropy rates are difficult to calculate directly from data, since the usual program involves calculating the entropy of trajectories of some length T, denoted (x, τ)^{→T}, and dividing by T:

h_µ = lim_{T→∞} H[(x, τ)^{→T}] / T. (7)

A better estimator, though, is the following (Crutchfield & Feldman, 2003):

h_µ = lim_{T→∞} dH[(x, τ)^{→T}] / dT, (8)

or the slope of the graph of H[(x, τ)^{→T}] vs. T. As the entropy of a mixed random variable of unknown dimension, this entropy is seemingly difficult to estimate from data. To calculate H[(x, τ)^{→T}], we use a trick like that of Ref. (Victor, 2002) and condition on the number of events N:

H[(x, τ)^{→T}] = H[N] + H[(x, τ)^{→T} | N]. (9)

We then further break the entropy into its discrete and continuous components:

H[(x, τ)^{→T} | N = n] = H[x_{0:n} | N = n] + H[τ_{0:n} | x_{0:n}, N = n], (10)

and use the k-nearest-neighbor entropy estimator (Kraskov et al., 2004) to estimate H[τ_{0:n} | x_{0:n}, N = n], with k chosen to be 3. We estimate both H[x_{0:n} | N = n] and H[N] using plug-in entropy estimators, as the state space is relatively well-sampled. We call this estimator model-free, in that we need not infer a model to calculate the estimate.

We introduce a model-based estimator, for which we infer a model and then use the inferred model's differential entropy rate as the differential entropy rate estimate. To calculate the differential entropy rate from the inferred model, we use a plug-in estimator based on the formula in Ref. (Marzen & Crutchfield, 2017):

ĥ_µ = − Σ_s p̂(s) ∫_0^∞ µ̂_s φ̂_s(t) log φ̂_s(t) dt, (11)

where the sum is over internal states of the model. The parameter µ_s is merely the mean interevent interval out of state s, ∫_0^∞ t φ̂_s(t) dt. We find the distribution over internal states s, p̂(s), by solving the linear equations (Marzen & Crutchfield, 2017)

p(s) = Σ_{s′} (µ_{s′} / µ_s) (n_{s′→s} / n_{s′}) p(s′). (12)
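A sketch of the model-based plug-in estimator of Eqs. (11)–(12), assuming the dwell-time densities have already been estimated on a common time grid; the function and variable names are ours, and µ̂_s is kept inside the integral exactly as Eq. (11) is written.

```python
import numpy as np

def entropy_rate_plugin(phi, t_grid, counts):
    """phi: dict state -> estimated density phi_s evaluated on t_grid.
    counts: dict (s_prev, s_next) -> transition counts n_{s_prev -> s_next}."""
    states = sorted(phi)
    mu = {s: np.trapz(t_grid * phi[s], t_grid) for s in states}  # mean dwell per state
    n_out = {s: sum(counts.get((s, sp), 0) for sp in states) for s in states}
    # Linear system of Eq. (12): p(s) = sum_{s'} (mu_{s'}/mu_s)(n_{s'->s}/n_{s'}) p(s')
    A = np.array([[mu[sp] / mu[s] * counts.get((sp, s), 0) / max(n_out[sp], 1)
                   for sp in states] for s in states])
    # p is the eigenvector of A with eigenvalue 1, normalized to sum to 1.
    w, v = np.linalg.eig(A)
    p = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    p = p / p.sum()
    h = 0.0
    for p_s, s in zip(p, states):
        # Guard the log at zero density; 0 * log(0) contributes nothing.
        integrand = mu[s] * phi[s] * np.log(np.where(phi[s] > 0, phi[s], 1.0))
        h -= p_s * np.trapz(integrand, t_grid)   # Eq. (11)
    return h
```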
We use the MAP estimate of the model as described previously and estimate the interevent interval density functions φ_s(t) using a Parzen window estimate, with smoothing parameter h chosen so as to maximize the pseudolikelihood (Marron et al., 1987), given that those proved to have lower mean-squared error than the neural network density estimation technique in the previous subsection. In other words, we use the neural network density estimation technique to choose the model, but once the model is chosen, we use the Parzen window estimates to estimate the density for purposes of estimating the entropy rate. A full mathematical analysis of the bias and variance is beyond the scope of this paper.

Fig. 4 shows a comparison between the model-free method (the k-nearest neighbor estimator of entropy) and the model-based method (estimation using the inferred model and Eq. 11) as a function of the length of trajectories simulated for the model. In Fig. 4(top), the most likely (two-state) model, as ascertained by the procedure in the previous subsection, is used for the model-based plug-in estimator of Eq. 11; the estimate is therefore based on the wrong model, and hence leads to a systematic overestimate of the entropy rate. When the correct four-state model is used for the plug-in estimator in Fig. 4(bottom), the model-based estimator has much lower variance than the model-free method.

Efficiently estimating the excess entropy (Crutchfield & Feldman, 2003; Bialek et al., 2001b;a), an additional important informational measure, requires models of the time-reversed process. Future research will elucidate the needed retrodictive representations of unifilar hidden semi-Markov models, which can be determined from the “forward” unifilar hidden semi-Markov models." }, { "heading": "4.3 IMPROVED PREDICTORS USING THE INFERRED CAUSAL STATES", "text": "There is a wide array of techniques developed for discrete-time prediction, as described earlier in the manuscript. We can develop continuous-time techniques that build on these discrete-time techniques, e.g., by using dwell times and symbols as inputs to an RNN. However, based on the experiments shown here, we seem to gain by first identifying continuous-time causal states.

The first prediction method, which we call “predictive ANN” or PANN (at the risk of confusion with the ANN method for density estimation described earlier), takes (x_{−n+1}, τ_{−n+1}), . . . , (x_0, τ_0^+) as input to a feedforward neural network that is relatively shallow (six layers) and somewhat thin (25 nodes). Other network architectures were tried with little improvement. The weights of the network are trained to predict the emitted value x at a time T later, based on a mean-squared error loss function. For this method to work, the neural network must guess the hidden state g from the observed data, which can be done if the dwell-time distributions of the various states are dissimilar. Increases in n can increase the ability of the neural network to correctly guess its hidden state and thus predict future symbols, assuming enough data to avoid overfitting; here, n is chosen via cross-validation.

The second of these methods, which we label “RNN”, takes (x_{−n+1}, τ_{−n+1}), . . . , (x_0, τ_0^+) as input to an LSTM, though any RNN could have been chosen. The LSTM is asked to produce an estimate of x at a time T later, subject to a mean-squared error loss function, similar to the first prediction method. A sketch of how these input windows can be constructed is given below.
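As a concrete illustration of how the (x, τ) histories feeding the PANN and RNN baselines can be assembled, here is a minimal sketch; predicting the next emitted symbol rather than the value at a fixed horizon T is a simplification, and all names are ours rather than the authors'.

```python
import numpy as np

def history_windows(events, n):
    """Build PANN/RNN inputs from a list of (symbol, dwell_time) events.

    Returns X of shape (num_windows, n, 2), the last n (x, tau) pairs,
    and y, the symbol emitted next (simplifying the fixed-horizon target)."""
    X, y = [], []
    for i in range(n, len(events)):
        X.append([(float(x), float(tau)) for x, tau in events[i - n:i]])
        y.append(float(events[i][0]))
    return np.asarray(X, dtype=np.float32), np.asarray(y, dtype=np.float32)

# Example: windows of length 4 over a binary event stream.
# X, y = history_windows([(0, 1.2), (1, 0.4), (0, 2.3), (1, 0.9), (0, 1.1)], 4)
```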
The third of these methods, which we label “uhsMm”, preprocesses the input data using an inferred unifilar hidden semi-Markov model, so that each time step is associated with a hidden state g, a time since the last symbol change τ_0^+, and a current emitted symbol x_0. In discrete-time applications, there is an explicit formula for the optimal predictor in terms of the ε-machine; but for continuous-time applications, there is no such formula, and so we use a k-nearest neighbor estimate. More precisely, we find the k closest data points in the training data to the data point under consideration, and estimate x_T as the average of the corresponding future values in the training set. In the limit of infinite data, so that the correct model is identified, and for correctly-chosen k, this method will output an optimal predictor; we choose k via cross-validation.

The synthetic dataset is generated from Fig. 3(top) with φ_A(t) = φ_D(t) as inverse Gaussians with mean 1 and scale 5, and with φ_B(t) = φ_C(t) as inverse Gaussians with mean 3 and scale 2. We chose these means and scales so that it would be easier, in principle, for the non-uhsMm methods (i.e., PANN and RNN) to implicitly infer the hidden state (A, B, C, or D). Given the difference in dwell-time distributions for each of the hidden states, such implicit inference is necessary for accurate predictions. In the experiments shown in Fig. 5, the feedforward neural network and the recurrent neural network are typically outperformed by the uhsMm method. The corresponding mean-squared errors for the three methods are shown in Fig. 3(bottom) for two different dataset sizes. Different network architectures, learning rates, and numbers of epochs were tried; the results shown in Fig. 5 are typical. Using a k-nearest neighbor estimate on the causal states (i.e., the internal state of the uhsMm) to predict the future symbol requires little hyperparameter tuning and outperforms compute-intensive feedforward and recurrent neural network approaches." }, { "heading": "5 DISCUSSION", "text": "We have introduced a new algorithm for inferring causal states (Shalizi & Crutchfield, 2001) of a continuous-time, discrete-event process using the groundwork of Ref. (Marzen & Crutchfield, 2017). We have introduced a new estimator of entropy rate that uses the causal states. And finally, we have shown that a predictor based on causal states is more accurate and less compute-heavy than other predictors.

The new inference, estimation, and prediction algorithms could be used to infer a predictive model of complex continuous-time, discrete-event processes, such as animal behavior, and to calculate estimates of the intrinsic randomness of such complex processes. Future research could delve into improving estimators of other time series information measures (James et al., 2011), using something more accurate than BIC to calculate MAP models, or enumerating the topology of all possible uhsMm models for non-binary alphabets (Johnson et al.)." }, { "heading": "ACKNOWLEDGMENTS", "text": "This material is based upon work supported by, or in part by, the U. S. Army Research Laboratory and the U. S. Army Research Office under contracts W911NF-13-1-0390 and W911NF-18-1-0028 and the Moore Foundation." } ]
2019
null
SP:06bbc70edab65f046adb46bc364c3b91f5880845
[ "This paper proposes to leverage the between-node-path information into the inference of conventional graph neural network methods. Specifically, the proposed method treats the nodes in training set as a reference corpus and, when infering the label of a specific node, makes this node \"attend\" to the reference corpus, where the \"attention\" weights are calculated based on the node representations and the between-node paths. (The paper used different terms about the \"attention\".)", "This paper presents a semi-supervised classification method for classifying unlabeled nodes in graph data. The authors propose a Graph Inference Learning (GIL) framework to learn node labels on graph topology. The node labeling is based of three aspects: 1) node representation to measure the similarity between the centralized subgraph around the unlabeled node and reference node; 2) structure relation that measures the similarity between node attributes; and 3) the reachability between unlabeled query node and reference node." ]
In this work, we address semi-supervised classification of graph data, where the categories of those unlabeled nodes are inferred from labeled nodes as well as graph structures. Recent works often solve this problem via advanced graph convolution in a conventionally supervised manner, but the performance could degrade significantly when labeled data is scarce. To this end, we propose a Graph Inference Learning (GIL) framework to boost the performance of semi-supervised node classification by learning the inference of node labels on graph topology. To bridge the connection between two nodes, we formally define a structure relation by encapsulating node attributes, between-node paths, and local topological structures together, which can make the inference conveniently deduced from one node to another node. For learning the inference process, we further introduce meta-optimization on structure relations from training nodes to validation nodes, such that the learnt graph inference capability can be better self-adapted to testing nodes. Comprehensive evaluations on four benchmark datasets (including Cora, Citeseer, Pubmed, and NELL) demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods on the semi-supervised node classification task.
[ { "affiliations": [], "name": "Chunyan Xu" }, { "affiliations": [], "name": "Zhen Cui" }, { "affiliations": [], "name": "Xiaobin Hong" }, { "affiliations": [], "name": "Tong Zhang" }, { "affiliations": [], "name": "Jian Yang" }, { "affiliations": [], "name": "Wei Liu" } ]
[ { "authors": [ "Sami Abu-El-Haija", "Amol Kapoor", "Bryan Perozzi", "Joonseok Lee" ], "title": "N-gcn: Multi-scale graph convolution for semi-supervised node classification", "venue": "arXiv preprint arXiv:1802.08888,", "year": 2018 }, { "authors": [ "James Atwood", "Don Towsley" ], "title": "Diffusion-convolutional neural networks", "venue": "In NeurIPS, pp", "year": 2001 }, { "authors": [ "Karsten M Borgwardt", "Hans-Peter Kriegel", "SVN Vishwanathan", "Nicol N Schraudolph" ], "title": "Graph kernels for disease outcome prediction from protein-protein interaction networks", "venue": "Pacific Symposium on Biocomputing Pacific Symposium on Biocomputing,", "year": 2007 }, { "authors": [ "Ulrik Brandes", "Daniel Delling", "Marco Gaertler", "Robert Gorke", "Martin Hoefer", "Zoran Nikoloski", "Dorothea Wagner" ], "title": "On modularity clustering", "venue": "IEEE transactions on knowledge and data engineering,", "year": 2008 }, { "authors": [ "Andrew Carlson", "Justin Betteridge", "Bryan Kisiel", "Burr Settles", "Estevam R. Hruschka Jr.", "Tom M. Mitchell" ], "title": "Toward an architecture for never-ending language learning", "venue": "In AAAI,", "year": 2010 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Jian Du", "Shanghang Zhang", "Guanhang Wu", "José MF Moura", "Soummya Kar" ], "title": "Topology adaptive graph convolutional networks", "venue": "arXiv preprint arXiv:1710.10370,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Mikael Henaff", "Joan Bruna", "Yann LeCun" ], "title": "Deep convolutional networks on graph-structured data", "venue": "arXiv preprint arXiv:1506.05163,", "year": 2015 }, { "authors": [ "Jiatao Jiang", "Zhen Cui", "Chunyan Xu", "Jian Yang" ], "title": "Gaussian-induced convolution for graphs", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Hisashi Kashima", "Koji Tsuda", "Akihiro Inokuchi" ], "title": "Marginalized kernels between labeled graphs", "venue": "In ICML, pp", "year": 2003 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence neural networks", "venue": null, "year": 2016 }, { "authors": [ "Wei Liu", "Junfeng He", "Shih-Fu Chang" ], "title": "Large graph construction for scalable semi-supervised learning", "venue": "In ICML,", "year": 2010 }, { "authors": [ "Wei Liu", "Jun Wang", "Shih-Fu Chang" ], "title": "Robust and scalable graph-based semisupervised learning", "venue": "Proceedings of the IEEE,", "year": 2012 }, { "authors": [ "Zhiling Luo", "Ling Liu", "Jianwei Yin", "Ying Li", "Zhaohui Wu" ], "title": "Deep learning of graphs with ngram convolutional neural networks", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2017 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodola", "Jan Svoboda", "Michael M Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Christopher Morris", "Kristian Kersting", "Petra Mutzel" ], "title": "Glocalized weisfeiler-lehman graph kernels: Global-local feature maps of graphs", "venue": "In ICDM,", "year": 2017 }, { "authors": [ "Mathias Niepert", "Mohamed Ahmed", "Konstantin Kutzkov" ], "title": "Learning convolutional neural networks for graphs", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Francesco Orsini", "Daniele Baracchi", "Paolo Frasconi" ], "title": "Shift aggregate extract networks", "venue": "arXiv preprint arXiv:1703.05537,", "year": 2017 }, { "authors": [ "Lawrence Page", "Sergey Brin", "Rajeev Motwani", "Terry Winograd" ], "title": "The pagerank citation ranking: Bringing order to the web", "venue": "Technical Report 1999-66,", "year": 1999 }, { "authors": [ "Shirui Pan", "Ruiqi Hu", "Guodong Long", "Jing Jiang", "Lina Yao", "Chengqi Zhang" ], "title": "Adversarially regularized graph autoencoder for graph embedding", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Nino Shervashidze", "SVN Vishwanathan", "Tobias Petri", "Kurt Mehlhorn", "Karsten Borgwardt" ], "title": "Efficient graphlet kernels for large graph comparison", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Flood Sung", "Li Zhang", "Tao Xiang", "Timothy Hospedales", "Yongxin Yang" ], "title": "Learning to learn: Meta-critic networks for sample efficient learning", "venue": "arXiv preprint arXiv:1706.09529,", "year": 2017 }, { "authors": [ "Kiran K Thekumparampil", "Chong Wang", "Sewoong Oh", "Li-Jia Li" ], "title": "Attention-based graph neural network for semi-supervised learning", "venue": "arXiv preprint arXiv:1803.03735,", "year": 2018 }, { "authors": [ "Danfei Xu", "Yuke Zhu", "Christopher B Choy", "Li Fei-Fei" ], "title": "Scene graph generation by iterative message passing", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Pinar Yanardag", "SVN Vishwanathan" ], "title": "Deep graph kernels", "venue": "In SIGKDD, pp", "year": 2015 }, { "authors": [ "Zhilin Yang", "William W 
Cohen", "Ruslan Salakhutdinov" ], "title": "Revisiting semi-supervised learning with graph embeddings", "venue": null, "year": 2016 }, { "authors": [ "Bing Yu", "Haoteng Yin", "Zhanxing Zhu" ], "title": "Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Tong Zhang", "Zhen Cui", "Chunyan Xu", "Wenming Zheng", "Jian Yang" ], "title": "Variational pathway reasoning for eeg emotion recognition", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Wenting Zhao", "Zhen Cui", "Chunyan Xu", "Chengzheng Li", "Tong Zhang", "Jian Yang" ], "title": "Hashing graph convolution for node classification", "venue": null, "year": 2019 }, { "authors": [ "Dengyong Zhou", "Olivier Bousquet", "Thomas N Lal", "Jason Weston", "Bernhard Schölkopf" ], "title": "Learning with local and global consistency", "venue": "In NeurIPS,", "year": 2004 }, { "authors": [ "Xiaojin Zhu", "Zoubin Ghahramani", "John D Lafferty" ], "title": "Semi-supervised learning using gaussian fields and harmonic functions", "venue": "In ICML, pp", "year": 2003 }, { "authors": [ "Chenyi Zhuang", "Qiang Ma" ], "title": "Dual graph convolutional networks for graph-based semi-supervised classification", "venue": "In WWW,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph, which comprises a set of vertices/nodes together with connected edges, is a formal structural representation of non-regular data. Due to the strong representation ability, it accommodates many potential applications, e.g., social network (Orsini et al., 2017), world wide data (Page et al., 1999), knowledge graph (Xu et al., 2017), and protein-interaction network (Borgwardt et al., 2007). Among these, semi-supervised node classification on graphs is one of the most interesting also popular topics. Given a graph in which some nodes are labeled, the aim of semi-supervised classification is to infer the categories of those remaining unlabeled nodes by using various priors of the graph.\nWhile there have been numerous previous works (Brandes et al., 2008; Zhou et al., 2004; Zhu et al., 2003; Yang et al., 2016; Zhao et al., 2019) devoted to semi-supervised node classification based on explicit graph Laplacian regularizations, it is hard to efficiently boost the performance of label prediction due to the strict assumption that connected nodes are likely to share the same label information. With the progress of deep learning on grid-shaped images/videos (He et al., 2016), a few of graph convolutional neural networks (CNN) based methods, including spectral (Kipf & Welling, 2017) and spatial methods (Niepert et al., 2016; Pan et al., 2018; Yu et al., 2018), have been proposed to learn local convolution filters on graphs in order to extract more discriminative node representations. Although graph CNN based methods have achieved considerable capabilities of graph embedding by optimizing filters, they are limited into a conventionally semi-supervised framework and lack of an efficient inference mechanism on graphs. Especially, in the case of few-shot learning, where a small number of training nodes are labeled, this kind of methods would drastically compromise the performance. For example, the Pubmed graph dataset (Sen et al., 2008) consists ∗Corresponding author: Zhen Cui.\n(b) The process of Graph inference learning. We extract the local representation from the local subgraph (the circle with dashed line The red wave line denote the node reachability from to . d t th h bilit f d t th d\nof 19,717 nodes and 44,338 edges, but only 0.3% nodes are labeled for the semi-supervised node classification task. These aforementioned works usually boil down to a general classification task, where the model is learnt on a training set and selected by checking a validation set. However, they do not put great efforts on how to learn to infer from one node to another node on a topological graph, especially in the few-shot regime.\nIn this paper, we propose a graph inference learning (GIL) framework to teach the model itself to adaptively infer from reference labeled nodes to those query unlabeled nodes, and finally boost the performance of semi-supervised node classification in the case of a few number of labeled samples. Given an input graph, GIL attempts to infer the unlabeled nodes from those observed nodes by building between-node relations. The between-node relations are structured as the integration of node attributes, connection paths, and graph topological structures. It means that the similarity between two nodes is decided from three aspects: the consistency of node attributes, the consistency of local topological structures, and the between-node path reachability, as shown in Fig. 1. 
The local structures anchored around each node, as well as the attributes of the nodes therein, are jointly encoded with graph convolution (Defferrard et al., 2016) for the sake of high-level feature extraction. For the between-node path reachability, we adopt the random walk algorithm to obtain the characteristics from a labeled reference node v_i to a query unlabeled node v_j in a given graph. Based on the computed node representations and between-node reachability, the structure relations can be obtained by computing the similarity scores/relationships from reference nodes to unlabeled nodes in a graph. Inspired by the recent meta-learning strategy (Finn et al., 2017), we learn to infer the structure relations from a training set to a validation set, which can benefit the generalization capability of the learned model. In other words, our proposed GIL attempts to learn some transferable knowledge underlying the structure relations from training samples to validation samples, such that the learned structure relations can be better self-adapted to the new testing stage.

We summarize the main contributions of this work in three aspects:

• We propose a novel graph inference learning framework by building structure relations to infer unknown node labels from those labeled nodes in an end-to-end way. The structure relations are well defined by jointly considering node attributes, between-node paths, and graph topological structures.

• To make the inference model better generalize to test nodes, we introduce a meta-learning procedure to optimize the structure relations, which, to the best of our knowledge, is the first such attempt for graph node classification.

• Comprehensive evaluations on three citation network datasets (including Cora, Citeseer, and Pubmed) and one knowledge graph dataset (i.e., NELL) demonstrate the superiority of our proposed GIL in contrast with other state-of-the-art methods on the semi-supervised classification task." }, { "heading": "2 RELATED WORK", "text": "Graph CNNs: With the rapid development of deep learning methods, various graph convolutional neural networks (Kashima et al., 2003; Morris et al., 2017; Shervashidze et al., 2009; Yanardag & Vishwanathan, 2015; Jiang et al., 2019; Zhang et al., 2020) have been exploited to analyze irregular graph-structured data. For better extending general convolutional neural networks to graph domains, two broad strategies have been proposed, including spectral and spatial convolution methods. Specifically, spectral filtering methods (Henaff et al., 2015; Kipf & Welling, 2017) develop convolution-like operators in the spectral domain, and then perform a series of spectral filters by decomposing the graph Laplacian. Unfortunately, the spectral-based approaches often lead to high computational complexity due to the operation of eigenvalue decomposition, especially for a large number of graph nodes. To alleviate this computational burden, local spectral filtering methods (Defferrard et al., 2016) were then proposed, parameterizing the frequency responses as a Chebyshev polynomial approximation. Another type of graph CNNs, namely spatial methods (Li et al., 2016; Niepert et al., 2016), perform the filtering operation by defining the spatial structures of adjacent vertices. Various approaches can be employed to aggregate or sort neighboring vertices, such as diffusion CNNs (Atwood & Towsley, 2016), GraphSAGE (Hamilton et al., 2017), PSCN (Niepert et al., 2016), and NgramCNN (Luo et al., 2017).
From the perspective of data distribution, recently, the Gaussian-induced convolution model (Jiang et al., 2019) was proposed to disentangle the aggregation process through encoding adjacent regions with a Gaussian mixture model.

Semi-supervised node classification: Among various graph-related applications, semi-supervised node classification has gained increasing attention recently, and various approaches have been proposed to deal with this problem, including explicit graph Laplacian regularization and graph-embedding approaches. Several classic algorithms with graph Laplacian regularization include the label propagation method using Gaussian random fields (Zhu et al., 2003), the regularization framework relying on local/global consistency (Zhou et al., 2004), and the random walk-based sampling algorithm for acquiring context information (Yang et al., 2016). To further address scalable semi-supervised learning issues (Liu et al., 2012), the Anchor Graph regularization approach (Liu et al., 2010) was proposed to scale linearly with the number of graph nodes and was then applied to massive-scale graph datasets. Several graph convolution network methods (Abu-El-Haija et al., 2018; Du et al., 2017; Thekumparampil et al., 2018; Velickovic et al., 2018; Zhuang & Ma, 2018) were then developed to obtain discriminative representations of input graphs. For example, Kipf et al. (Kipf & Welling, 2017) proposed a scalable graph CNN model, which scales linearly in the number of graph edges and learns graph representations by encoding both local graph structures and node attributes. Graph attention networks (GAT) (Velickovic et al., 2018) compute hidden representations of each node by attending to its neighbors with a self-attention strategy. By jointly considering the local- and global-consistency information, dual graph convolutional networks (Zhuang & Ma, 2018) were presented to deal with semi-supervised node classification. The critical difference between our proposed GIL and previous semi-supervised node classification methods is that we adopt a graph inference strategy, defining structure relations on graphs and then leveraging a meta-optimization mechanism to learn an inference model (which, to the best of our knowledge, is the first such attempt), while the existing graph CNNs treat semi-supervised node classification as a general classification task." }, { "heading": "3 THE PROPOSED MODEL", "text": "" }, { "heading": "3.1 PROBLEM DEFINITION", "text": "Formally, we denote an undirected/directed graph as G = {V, E, X, Y}, where V = {v_i}_{i=1}^n is the finite set of n (or |V|) vertices, E ∈ R^{n×n} defines the adjacency relationships (i.e., edges) between vertices, representing the topology of G, X ∈ R^{n×d} records the explicit/implicit attributes/signals of vertices, and Y ∈ R^n records the vertex labels over C classes. The edge E_ij = E(v_i, v_j) = 0 if and only if vertices v_i and v_j are not connected; otherwise E_ij ≠ 0. The attribute matrix X is attached to the vertex set V, whose i-th row X_{v_i} (or X_{i·}) represents the attribute of the i-th vertex v_i. It means that v_i ∈ V carries a vector of d-dimensional signals. Associated with each node v_i ∈ V, there is a discrete label y_i ∈ {1, 2, · · · , C}.

We consider the task of semi-supervised node classification over graph data, where only a small number of vertices are labeled for model learning, i.e., |V_Label| ≪ |V|. Generally, we have three node sets: a training set V_tr, a validation set V_val, and a testing set V_te.
In the standard protocol of the prior literature (Yang et al., 2016), the three node sets share the same label space. We follow, but do not restrict our method to, this protocol. Given the training and validation node sets, the aim is to predict the node labels of testing nodes by using node attributes as well as edge connections. A sophisticated machine learning technique used in most existing methods (Kipf & Welling, 2017; Zhou et al., 2004) is to choose the optimal classifier (trained on a training set) after checking the performance on the validation set. However, these methods essentially ignore how to extract transferable knowledge from the known labeled nodes to unlabeled nodes, even though the graph structure itself implies node connectivity/reachability. Moreover, due to the scarcity of labeled samples, the performance of such a classifier is usually not satisfying. To address these issues, we introduce a meta-learning mechanism (Finn et al., 2017; Ravi & Larochelle, 2017; Sung et al., 2017) to learn to infer node labels on graphs. Specifically, the graph structure, between-node path reachability, and node attributes are jointly modeled in the learning process. Our aim is to learn to infer from labeled nodes to unlabeled nodes, so that the learner can perform better on a validation set and thus classify a testing set more accurately." }, { "heading": "3.2 STRUCTURE RELATION", "text": "For convenient inference, we specifically build a structure relation between two nodes on the topology graph. The labeled vertices (in a training set) are viewed as reference nodes, and their information can be propagated to those unlabeled vertices to improve the label prediction accuracy. Formally, given a reference node v_i ∈ V_Label, we define the score of a query node v_j being similar to v_i as

s_{i→j} = f_r(f_e(G_{v_i}), f_e(G_{v_j}), f_P(v_i, v_j, E)), (1)

where G_{v_i} and G_{v_j} may be understood as the centralized subgraphs around v_i and v_j, respectively. f_e, f_r, and f_P are three abstract functions that we explain as follows:

• Node representation f_e(G_{v_i}) → R^{d_v} encodes the local representation of the centralized subgraph G_{v_i} around node v_i, and may thus be understood as a local filter function on graphs. This function should not only take the signals of the nodes therein as input, but also consider the local topological structure of the subgraph for more accurate similarity computation. To this end, we perform spectral graph convolution on subgraphs to learn discriminative node features, analogous to pixel-level feature extraction from convolution maps of gridded images. The details of the feature extraction f_e are described in Section 4.

• Path reachability f_P(v_i, v_j, E) → R^{d_p} represents the characteristics of path reachability from v_i to v_j. As there usually exist multiple traversal paths between two nodes, we choose this function to be the reachable probabilities of different lengths of walks from v_i to v_j. More details are given in Section 4.

• Structure relation f_r(R^{d_v}, R^{d_v}, R^{d_p}) → R is a relational function computing the score of v_j being similar to v_i. This function is not exchangeable for different orders of the two nodes, due to the asymmetric reachable relationship f_P. If necessary, we may easily revise it into a symmetric function, e.g., by summarizing the two traversal directions. The score function depends on three inputs: the local representations extracted from the subgraphs, f_e(G_{v_i}) and f_e(G_{v_j}), respectively, and the path reachability from v_i to v_j.
In semi-supervised node classification, we take the training node set V_tr as the reference samples, and the validation set V_val as the query samples during the training stage. Given a query node v_j ∈ V_val, we can derive the class similarity score of v_j w.r.t. the c-th (c = 1, · · · , C) category by weighting the reference samples C_c = {v_k | y_{v_k} = c}. Formally, we can further revise Eqn. (1) and define the class-to-node relationship function as

s_{C_c→j} = φ_r( F_{C_c→v_j} Σ_{v_i∈C_c} w_{i→j} · f_e(G_{v_i}), f_e(G_{v_j}) ), (2)
s.t. w_{i→j} = φ_w(f_P(v_i, v_j, E)), (3)

where the function φ_w maps a reachable vector f_P(v_i, v_j, E) into a weight value, and the function φ_r computes the similarity score between v_j and the c-th class nodes. The normalization factor F_{C_c→v_j} of the c-th category w.r.t. v_j is defined as

F_{C_c→v_j} = 1 / Σ_{v_i∈C_c} w_{i→j}. (4)

For the relation function φ_r and the weight function φ_w, we may choose some subnetworks to instantiate them in practice. The detailed implementation of our model can be found in Section 4." }, { "heading": "3.3 INFERENCE LEARNING", "text": "According to the class-to-node relationship function in Eqn. (2), given a query node v_j, we can obtain a score vector s_{C→j} = [s_{C_1→j}, · · · , s_{C_C→j}]ᵀ ∈ R^C after computing the relations to all classes. The indexed category with the maximum score is taken as the estimated label. Thus, we can define the loss function based on cross entropy as follows:

L = − Σ_{v_j} Σ_{c=1}^C y_{j,c} log ŷ_{C_c→j}, (5)

where y_{j,c} is a binary indicator (i.e., 0 or 1) of class label c for node v_j, and the softmax operation is imposed on s_{C_c→j}, i.e., ŷ_{C_c→j} = exp(s_{C_c→j}) / Σ_{k=1}^C exp(s_{C_k→j}). Other error functions may be chosen as the loss function, e.g., mean square error. In the regime of general classification, the cross-entropy loss is a standard choice that performs well.

Given a training set V_tr, we expect that the best performance can be obtained on the validation set V_val after optimizing the model on V_tr. Given a trained/pretrained model Θ = {f_e, φ_w, φ_r}, we iteratively perform gradient updates on the training set V_tr to obtain the new model; formally,

Θ′ = Θ − α∇_Θ L_tr(Θ), (6)

where α is the updating rate. Note that, in the computation of class scores, since the reference node and the query node can both be from the training set V_tr, we set the computation weight w_{i→j} = 0 if i = j in Eqn. (3). After several iterations of gradient descent on V_tr, we expect a better performance on the validation set V_val, i.e., min_Θ L_val(Θ′). Thus, we can perform the gradient update as follows:

Θ = Θ − β∇_Θ L_val(Θ′), (7)

where β is the learning rate of the meta optimization (Finn et al., 2017).

During the training process, we may perform batch sampling from training nodes and validation nodes, instead of taking all of them at one time. In the testing stage, we may take all training nodes and perform the model update according to Eqn. (6), as in the training process. The updated model is used as the final model and is then fed into Eqn. (2) to infer the class labels for those query nodes. A sketch of this meta-optimization loop is given below.
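Below is a compact sketch of the meta-optimization in Eqs. (6)–(7), written as a first-order approximation in the style of MAML (Finn et al., 2017); the model, loss function, and batch objects are placeholders we introduce for illustration, and the full method in Eq. (7) differentiates through the inner update rather than reusing the adapted gradients.

```python
import copy
import torch

def meta_step(model, loss_fn, train_batch, val_batch, alpha=1e-3, beta=1e-3):
    # Inner update (Eq. 6): one gradient step on the training nodes,
    # performed on a temporary copy of the parameters, giving Theta'.
    adapted = copy.deepcopy(model)
    x_tr, y_tr = train_batch
    inner_loss = loss_fn(adapted(x_tr), y_tr)
    grads = torch.autograd.grad(inner_loss, list(adapted.parameters()))
    with torch.no_grad():
        for p, g in zip(adapted.parameters(), grads):
            p -= alpha * g
    # Outer update (Eq. 7): evaluate the adapted model on validation nodes
    # and apply the resulting gradients to the original parameters Theta.
    # This is the first-order (FOMAML) shortcut; Eq. (7) as written also
    # back-propagates through the inner step.
    x_val, y_val = val_batch
    outer_loss = loss_fn(adapted(x_val), y_val)
    outer_grads = torch.autograd.grad(outer_loss, list(adapted.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), outer_grads):
            p -= beta * g
```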
" }, { "heading": "4 MODULES", "text": "In this section, we instantiate all modules (i.e., functions) of the aforementioned structure relation. The implementation details can be found in the following.

Node Representation f_e(G_{v_i}): The local representation at vertex v_i can be extracted by performing the graph convolution operation on the subgraph G_{v_i}. Similar to gridded images/videos, on which local convolution kernels are defined as multiple lattices with various receptive fields, the spectral graph convolution is used to encode the local representations of an input graph in our work.

Given a graph sample G = {V, E, X}, the normalized graph Laplacian matrix is L = I_n − D^{−1/2} E D^{−1/2} = U Λ U^T, with Λ the diagonal matrix of its eigenvalues. The spectral graph convolution can be defined as the multiplication of a signal X with a filter g_θ(Λ) = diag(θ) parameterized by θ in the Fourier domain: conv(X) = g_θ(L) ∗ X = U g_θ(Λ) U^T X, where the parameter θ ∈ R^n is a vector of Fourier coefficients. To reduce the computational complexity and obtain local information, we use an approximate local filter given by a Chebyshev polynomial (Defferrard et al., 2016), g_θ(Λ) = Σ_{k=0}^{K−1} θ_k T_k(Λ̂), where the parameter θ ∈ R^K is a vector of Chebyshev coefficients and T_k(Λ̂) ∈ R^{n×n} is the Chebyshev polynomial of order k evaluated at Λ̂ = 2Λ/λ_max − I_n, a diagonal matrix of scaled eigenvalues. The graph filtering operation can then be expressed as g_θ(L) ∗ X = Σ_{k=0}^{K−1} θ_k T_k(L̂) X, where T_k(L̂) ∈ R^{n×n} is the Chebyshev polynomial of order k evaluated at the scaled Laplacian L̂ = 2L/λ_max − I_n. Further, we can construct multi-scale receptive fields for each vertex based on the Laplacian matrix L, where each receptive field records hopping neighborhood relationships around the reference vertex v_i and forms a local centralized subgraph.

Path Reachability f_P(v_i, v_j, E): Here we compute the probabilities of paths from vertex i to vertex j by employing random walks on graphs, which refers to traversing the graph from v_i to v_j according to the probability matrix P. For the input graph G with n vertices, the random-walk transition matrix can be defined as P = D^{−1} E, where D ∈ R^{n×n} is the diagonal degree matrix with D_ii = Σ_j E_ij. That is to say, each element P_ij is the probability of going from vertex i to vertex j in one step.

The sequence of nodes from vertex i to vertex j is a random walk on the graph, which can be modeled as a classical Markov chain over the set of graph vertices. In this formulation, P^t_ij is the probability of getting from vertex v_i to vertex v_j in t steps. This fact is easily exhibited by considering a t-step path from vertex v_i to vertex v_j as first taking a single step to some vertex h, and then taking t − 1 steps to v_j. The transition probability P^t in t steps can thus be formulated as

P^t_ij = P_ij if t = 1, and P^t_ij = Σ_h P_ih P^{t−1}_{hj} if t > 1, (8)

where each matrix entry P^t_ij denotes the probability of starting at vertex i and ending at vertex j in t steps. Finally, the node reachability from v_i to v_j can be written as a d_p-dimensional vector:

f_P(v_i, v_j, E) = [P_ij, P^2_ij, . . . , P^{d_p}_ij], (9)

where d_p refers to the step length of the longest path from v_i to v_j. A sketch of this computation follows.
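As a concrete illustration, the reachability features of Eqs. (8)–(9) can be computed with dense matrix powers as below; at scale one would use sparse matrices or per-pair walks, and the variable names are ours.

```python
import numpy as np

def reachability_features(E, d_p):
    """f_P(v_i, v_j, E) = [P_ij, P^2_ij, ..., P^{d_p}_ij] for all node pairs."""
    deg = E.sum(axis=1)
    P = E / np.maximum(deg[:, None], 1e-12)   # one-step transitions, P = D^{-1} E
    feats = np.empty(P.shape + (d_p,))
    Pt = P.copy()
    for t in range(d_p):
        feats[:, :, t] = Pt                   # entries of P^{t+1}
        Pt = Pt @ P
    return feats                              # feats[i, j] is the d_p-dim vector
```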
Class-to-Node Relationship s_{C_c→j}: To define the node relationship s_{i→j} from v_i to v_j, we simultaneously consider the path reachability f_P(v_i, v_j, E) and the local representations f_e(G_{v_i}) and f_e(G_{v_j}) of nodes v_i, v_j. The function φ_w(f_P(v_i, v_j, E)) in Eqn. (3), which maps the reachable vector f_P(v_i, v_j, E) ∈ R^{d_p} into a weight value, is implemented with two 16-dimensional fully connected layers in our experiments. The computed value w_{i→j} can be further used to weight the local features at node v_i, f_e(G_{v_i}) ∈ R^{d_v}. For obtaining the similarity score between v_j and the c-th class nodes C_c in Eqn. (2), we perform a concatenation of two input features, where one refers to the weighted features of vertex v_i, and the other is the local features of vertex v_j. One fully connected layer (w.r.t. φ_r) with C dimensions is finally adopted to obtain the relation regression score." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETTINGS", "text": "We evaluate our proposed GIL method on three citation network datasets, Cora, Citeseer, and Pubmed (Sen et al., 2008), and one knowledge graph dataset, NELL (Carlson et al., 2010). The statistical properties of the graph data are summarized in Table 1. Following the previous protocol in (Kipf & Welling, 2017; Zhuang & Ma, 2018), we split the graph data into a training set, a validation set, and a testing set. Taking into account the graph convolution and pooling modules, we alternately stack them into a multi-layer graph convolutional network. The GIL model consists of two graph convolution layers, each of which is followed by a mean-pooling layer, a class-to-node relationship regression module, and a final softmax layer. We have given the detailed configuration of the relationship regression module in the class-to-node relationship part of Section 4. The parameter d_p in Eqn. (9) is set to the mean length of between-node reachability paths in the input graph. The channels of the 1st and 2nd convolutional layers are set to 128 and 256, respectively. The scale of the receptive field is 2 in both convolutional layers. The dropout rate is set to 0.5 in the convolution and fully connected layers to avoid over-fitting, and the ReLU unit is leveraged as the nonlinear activation function. We pre-train our proposed GIL model for 200 iterations on the training set, where its initial learning rate, decay factor, and momentum are set to 0.05, 0.95, and 0.9, respectively. Here we train the GIL model using the stochastic gradient descent method with a batch size of 100. We further improve the inference learning capability of the GIL model for 1200 iterations with the validation set, where the meta-learning rates α and β are both set to 0.001." }, { "heading": "5.2 COMPARISON WITH STATE-OF-THE-ARTS", "text": "We compare the GIL approach with several state-of-the-art methods (Monti et al., 2017; Kipf & Welling, 2017; Zhou et al., 2004; Zhuang & Ma, 2018) over four graph datasets: Cora, Citeseer, Pubmed, and NELL. The classification accuracies of all methods are reported in Table 2. Our proposed GIL significantly outperforms the graph Laplacian regularized methods on the four graph datasets, including the DeepWalk (Zhou et al., 2004), modularity clustering (Brandes et al., 2008), Gaussian fields (Zhu et al., 2003), and graph embedding (Yang et al., 2016) methods. For example, we achieve much higher performance than the DeepWalk method (Zhou et al., 2004), e.g., 43.2% vs. 74.1% on the Citeseer dataset, 65.3% vs. 83.1% on the Pubmed dataset, and 58.1% vs. 78.9% on the NELL dataset. We find that the graph embedding method (Yang et al., 2016), which considers both label information and graph structure during sampling, obtains lower accuracies than our proposed GIL by 9.4% on the Citeseer dataset and 10.5% on the Cora dataset, respectively. This indicates that our proposed GIL can better optimize structure relations and thus improve the network generalization.
We further compare our proposed GIL with several existing deep graph embedding methods, including graph attention networks (Velickovic et al., 2018), dual graph convolutional networks (Zhuang & Ma, 2018), topology adaptive graph convolutional networks (Du et al., 2017), multi-scale graph convolution (Abu-El-Haija et al., 2018), etc. For example, our proposed GIL achieves a very large gain, e.g., 86.2% vs. 83.3% (Du et al., 2017) on the Cora dataset, and 78.9% vs. 66.0% (Kipf & Welling, 2017) on the NELL dataset. We also evaluate our proposed GIL method on a large graph dataset with a lower label rate, where it significantly outperforms existing baselines on the Pubmed dataset: by 3.1% over DGCN (Zhuang & Ma, 2018), 4.1% over the classic GCN (Kipf & Welling, 2017) and TAGCN (Du et al., 2017), 3.2% over AGNN (Thekumparampil et al., 2018), and 3.6% over N-GCN (Abu-El-Haija et al., 2018). This demonstrates that our proposed GIL performs very well on various graph datasets by building the graph inference learning process, where the limited label information and graph structures can be well employed in the prediction framework." }, { "heading": "5.3 ANALYSIS", "text": "The core of our GIL is to learn an inference capability that transfers from a training set to a validation set. In other words, the validation set is only used to teach the model itself how to transfer to unseen data. In contrast, conventional methods often employ a validation set to tune the parameters of a certain model of interest.

Influence of different between-node steps: We compare the classification performance within different between-node steps for our proposed GIL and GCN (Kipf & Welling, 2017), as illustrated in Fig. 2(a). The length of between-node steps is computed as the shortest path between reference nodes and query nodes. When the step between nodes is small, both the GIL and GCN methods can predict the category information for only a small part of the unlabeled nodes in the testing set. The reason may be that the node category information can be disturbed by nearest neighboring nodes with different labels, and that fewer nodes are within 1 or 2 steps in the testing set. The GIL and GCN methods can infer the category information for a part of the unlabeled nodes by adopting node attributes when two nodes are not connected in the graph (i.e., step = ∞). As the length of the reachability path increases, the inference process of the GIL method becomes more difficult and more graph structure information needs to be employed in the prediction process. GIL outperforms the classic GCN across different between-node steps, which indicates that our proposed GIL has a better inference capability than GCN by using the meta-optimization mechanism from training nodes to validation nodes.

Accuracy under different label rates on the Pubmed dataset:
label rate: 0.30% / 0.60% / 0.90% / 1.20% / 1.50% / 1.80%
GCN: 0.792 / 0.797 / 0.805 / 0.824 / 0.829 / 0.834
GIL (ours): 0.817 / 0.824 / 0.831 / 0.836 / 0.838 / 0.842

Influence of different label rates: We also explore the performance of the GIL method under different label rates; the detailed results on the Pubmed dataset are shown in Fig. 2(b). As label rates increase, the performances of GIL and GCN both improve, but the relative gain becomes narrow. The reason is that the reachable path lengths between unlabeled nodes and labeled nodes are reduced with the increase of labeled nodes, which weakens the effect of inference learning. In the extreme case, labels of unlabeled nodes could be determined by those neighbors within 1 ∼ 2 steps of reachability.
In summary, our proposed GIL method is particularly suited to the case where only a small ratio of nodes is labeled in the semi-supervised node classification task.

Inference learning process: Classification errors at different epochs on the validation set of the Cora dataset are illustrated in Fig. 3. Classification errors decrease rapidly as the number of iterations increases from the beginning to 400 iterations, and then descend slowly from 400 to 1200 iterations. This demonstrates that the knowledge learned from the training samples can be transferred for inferring node category information from the reference labeled nodes. The performance of semi-supervised classification can be further increased by improving the generalization capability of the graph CNN model.

Module analysis: We evaluate the effectiveness of different modules within our proposed GIL framework, including node representation f_e, path reachability f_P, and structure relation f_r. Note that the last one, f_r, is defined on top of the former two, so we consider the cases in Table 4 by adding modules incrementally. When no module is used, only the original attributes of nodes are used to predict labels. The case of only using f_e corresponds to the GCN method, which achieves 81.5% on the Cora dataset. The large gain from using the relation module f_r (i.e., from 81.5% to 85.0%) may be attributed to the ability of inference learning on attributes as well as on the local topology structures implicitly encoded in f_e. The path information f_P can further boost the performance by 1.2%, i.e., 86.2% vs. 85.0%. This demonstrates that the three different modules of our method together improve the graph inference learning capability.

Computational complexity: For our proposed GIL, the cost is mainly spent on the computations of node representation, between-node reachability, and class-to-node relationship, which are about O((n_tr + n_te) · ē · d_in · d_out), O((n_tr + n_te) · ē · P), and O(n_tr · n_te · d_out²), respectively. Here n_tr and n_te refer to the numbers of training and testing nodes, d_in and d_out denote the input and output dimensions of the node representation, ē is the average degree of a graph node, and P is the step length of node reachability. Compared with the classic graph CNNs (Kipf & Welling, 2017), our proposed GIL has a slightly higher cost due to the extra inference learning process, but can complete the testing stage within several seconds on these benchmark datasets." }, { "heading": "6 CONCLUSION", "text": "In this work, we tackled the semi-supervised node classification task with a graph inference learning method, which can better predict the categories of unlabeled nodes in an end-to-end framework. We build a structure relation to obtain the connection between any two graph nodes, in which node attributes, between-node paths, and graph structure information are encapsulated together. To better capture transferable knowledge, our method further learns to transfer the mined knowledge from the training samples to the validation set, finally boosting the prediction accuracy on the labels of unlabeled nodes in the testing set. The extensive experimental results demonstrate the effectiveness of our proposed GIL for solving the semi-supervised learning problem, even in the few-shot paradigm. In the future, we will extend the graph inference method to handle more graph-related tasks, such as graph generation and social network analysis."
}, { "heading": "ACKNOWLEDGMENT", "text": "This work was supported by the National Natural Science Foundation of China (Nos. 61972204, 61906094, U1713208), the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20191283 and BK20190019), and Tencent AI Lab Rhino-Bird Focused Research Program (No. JR201922)." } ]
2020
null
SP:bbcb77fc764f7e90ef6126d97d8195734fcdafe8
[ "This paper deals with 3 theoretical properties of ridge regression. First, it proves that the ridge regression estimator is equivalent to a specific representation which is useful as for instance it can be used to derive the training error of the ridge estimator. Second, it provides a bias correction mechanism for ridge regression and finally it provides proofs regarding the accuracy of several sketching algorithms for ridge regression.", "This paper presents a theoretical study of ridge regression, focusing on the practical problems of correcting for the bias of the cross-validation based estimate of the optimal regularisation parameter, and quantification of the asymptotic risk of sketching algorithms for ridge regression, both in the p / n -> gamma in (0, 1) regime (n = # data points, p = # dimensions). The authors derive most of their results exploiting their (AFAICT) new asymptotic characterisation of the ridge regression estimator which may be of independent interest. The whole study is complemented by a series of numerical experiments." ]
We study the following three fundamental problems about ridge regression: (1) what is the structure of the estimator? (2) how to correctly use cross-validation to choose the regularization parameter? and (3) how to accelerate computation without losing too much accuracy? We consider the three problems in a unified large-data linear model. We give a precise representation of ridge regression as a covariance matrix-dependent linear combination of the true parameter and the noise. We study the bias of K-fold cross-validation for choosing the regularization parameter, and propose a simple bias-correction. We analyze the accuracy of primal and dual sketching for ridge regression, showing they are surprisingly accurate. Our results are illustrated by simulations and by analyzing empirical data.
[ { "affiliations": [], "name": "Sifan Liu" } ]
[ { "authors": [ "2017. Nir Ailon", "Bernard Chazelle" ], "title": "Approximate nearest neighbors and the fast johnson-lindenstrauss transform", "venue": null, "year": 2017 }, { "authors": [ "Theodore W Anderson" ], "title": "An Introduction to Multivariate Statistical Analysis", "venue": null, "year": 2003 }, { "authors": [ "Romain Couillet", "Merouane Debbah" ], "title": "Random Matrix Methods for Wireless Communications", "venue": "gression. In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Paramveer Dhillon", "Yichao Lu", "Dean P Foster", "Lyle Ungar" ], "title": "New subsampling algorithms for fast least", "venue": null, "year": 2011 }, { "authors": [ "Lee H Dicker" ], "title": "Advances in neural information processing", "venue": null, "year": 2013 }, { "authors": [ "2016. Edgar Dobriban", "Sifan Liu" ], "title": "A new theory for sketching in linear regression", "venue": null, "year": 2016 }, { "authors": [ "Edgar Dobriban", "Yue Sheng" ], "title": "Distributed linear regression by averaging", "venue": "NeurIPS", "year": 2019 }, { "authors": [ "Petros Drineas", "Michael W Mahoney" ], "title": "RandNLA: randomized numerical linear algebra", "venue": "Communications of the ACM,", "year": 2016 }, { "authors": [ "Petros Drineas", "Michael W Mahoney" ], "title": "Lectures on randomized numerical linear algebra", "venue": "arXiv preprint arXiv:1712.08880,", "year": 2017 }, { "authors": [ "Petros Drineas", "Michael W Mahoney", "S Muthukrishnan" ], "title": "Sampling algorithms for l 2 regression and applications", "venue": "In Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm,", "year": 2006 }, { "authors": [ "Petros Drineas", "Michael W Mahoney", "S Muthukrishnan", "Tamás Sarlós" ], "title": "Faster least squares approximation", "venue": "Numerische mathematik,", "year": 2011 }, { "authors": [ "Ahmed el Alaoui", "Michael W Mahoney" ], "title": "Fast randomized kernel ridge regression with statistical guarantees", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Noureddine El Karoui" ], "title": "On the impact of predictor geometry on the performance on high-dimensional ridgeregularized generalized robust regression estimators", "venue": "Probability Theory and Related Fields,", "year": 2018 }, { "authors": [ "Noureddine El Karoui", "Holger Kösters" ], "title": "Geometric sensitivity of random matrix results: consequences for shrinkage estimators of covariance and related statistical methods", "venue": "arXiv preprint arXiv:1105.1404,", "year": 2011 }, { "authors": [ "Alon Gonen", "Francesco Orabona", "Shai Shalev-Shwartz" ], "title": "Solving ridge regression using sketched preconditioned svrg", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Walid Hachem", "Philippe Loubaton", "Jamal Najim" ], "title": "Deterministic equivalents for certain functionals of large random matrices", "venue": "The Annals of Applied Probability,", "year": 2007 }, { "authors": [ "Nathan Halko", "Per-Gunnar Martinsson", "Joel A Tropp" ], "title": "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions", "venue": "SIAM review,", "year": 2011 }, { "authors": [ "Trevor Hastie", "Robert Tibshirani", "Jerome Friedman" ], "title": "The elements of statistical learning", "venue": "Springer series in statistics,", "year": 2009 }, { "authors": [ "Trevor Hastie", "Andrea Montanari", "Saharon Rosset", "Ryan J 
Tibshirani" ], "title": "Surprises in high-dimensional ridgeless least squares interpolation", "venue": "arXiv preprint arXiv:1903.08560,", "year": 2019 }, { "authors": [ "Fumio Hiai", "Dénes Petz" ], "title": "The semicircle law, free random variables and entropy. Number 77", "venue": "American Mathematical Soc.,", "year": 2006 }, { "authors": [ "Zengfeng Huang" ], "title": "Near optimal frequent directions for sketching dense and sparse matrices", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Miles E Lopes", "Shusen Wang", "Michael W Mahoney" ], "title": "Error estimation for randomized least-squares algorithms via the bootstrap", "venue": "arXiv preprint arXiv:1803.08021,", "year": 2018 }, { "authors": [ "Ping Ma", "Michael W Mahoney", "Bin Yu" ], "title": "A statistical perspective on algorithmic leveraging", "venue": "The Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "J Tibshirani", "Robert Tibshirani" ], "title": "A bias correction for the minimum error rate in cross-validation", "venue": "Big and complex data analysis,", "year": 2017 }, { "authors": [ "2013a. Lijun Zhang", "Mehrdad Mahdavi", "Rong Jin", "Tianbao Yang", "Shenghuo Zhu" ], "title": "Recovering the optimal solution", "venue": "Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "‖Cn‖tr < ∞ (Dobriban", "Sheng" ], "title": "2018). Informally, linear combinations of the entries of An can be approximated by the entries of Bn", "venue": null, "year": 2018 } ]
[ { "heading": null, "text": "We study the following three fundamental problems about ridge regression: (1) what is the structure of the estimator? (2) how to correctly use cross-validation to choose the regularization parameter? and (3) how to accelerate computation without losing too much accuracy? We consider the three problems in a unified large-data linear model. We give a precise representation of ridge regression as a covariance matrix-dependent linear combination of the true parameter and the noise. We study the bias of K-fold cross-validation for choosing the regularization parameter, and propose a simple bias-correction. We analyze the accuracy of primal and dual sketching for ridge regression, showing they are surprisingly accurate. Our results are illustrated by simulations and by analyzing empirical data." }, { "heading": "1 INTRODUCTION", "text": "Ridge or `2-regularized regression is a widely used method for prediction and estimation when the data dimension p is large compared to the number of datapoints n. This is especially so in problems with many good features, where sparsity assumptions may not be justified. A great deal is known about ridge regression. It is Bayes optimal for any quadratic loss in a Bayesian linear model where the parameters and noise are Gaussian. The asymptotic properties of ridge have been widely studied (e.g., Tulino & Verdú, 2004; Serdobolskii, 2007; Couillet & Debbah, 2011; Dicker, 2016; Dobriban & Wager, 2018, etc). For choosing the regularization parameter in practice, cross-validation (CV) is widely used. In addition, there is an exact shortcut (e.g., Hastie et al., 2009, p. 243), which has good consistency properties (Hastie et al., 2019). There is also a lot of work on fast approximate algorithms for ridge, e.g., using sketching methods (e.g., el Alaoui & Mahoney, 2015; Chen et al., 2015; Wang et al., 2018; Chowdhury et al., 2018, among others).\nHere we seek to develop a deeper understanding of ridge regression, going beyond existing work in multiple aspects. We work in linear models under a popular asymptotic regime where n, p→∞ at the same rate (Marchenko & Pastur, 1967; Serdobolskii, 2007; Couillet & Debbah, 2011; Yao et al., 2015). In this framework, we develop a fundamental representation for ridge regression, which shows that it is well approximated by a linear scaling of the true parameters perturbed by noise. The scaling matrices are functions of the population-level covariance of the features. As a consequence, we derive formulas for the training error and bias-variance tradeoff of ridge.\nSecond, we study commonly used methods for choosing the regularization parameter. Inspired by the observation that CV has a bias for estimating the error rate (e.g., Hastie et al., 2009, p. 243), we study the bias of CV for selecting the regularization parameter. We discover a surprisingly simple form for the bias, and propose a downward scaling bias correction procedure. Third, we study the accuracy loss of a class of randomized sketching algorithms for ridge regression. These algorithms approximate the sample covariance matrix by sketching or random projection. We show they can be surprisingly accurate, e.g., they can sometimes cut computational cost in half, only incurring 5% extra error. Even more, they can sometimes improve the MSE if a suboptimal regularization parameter is originally used.\nOur work leverages recent results from asymptotic random matrix theory and free probability theory. 
One challenge in our analysis is to find the limit of the trace tr (Σ1 + Σ−12 )\n−1/p, where Σ1 and Σ2 are p × p independent sample covariance matrices of Gaussian random vectors. The calculation requires nontrivial aspects of freely additive convolutions (e.g., Voiculescu et al., 1992; Nica & Speicher, 2006).\nOur work is connected to prior works on ridge regression in high-dimensional statistics (Serdobolskii, 2007) and wireless communications (Tulino & Verdú, 2004; Couillet & Debbah, 2011). Among other related works, El Karoui & Kösters (2011) discuss the implications of the geometric sensitivity of random matrix theory for ridge regression, without considering our problems. El Karoui (2018) and Dicker (2016) study ridge regression estimators, but focus only on the risk for identity covariance. Hastie et al. (2019) study “ridgeless” regression, where the regularization parameter tends to zero.\nSketching is an increasingly popular research topic, see Vempala (2005); Halko et al. (2011); Mahoney (2011); Woodruff (2014); Drineas & Mahoney (2017) and references therein. For sketched ridge regression, Zhang et al. (2013a;b) study the dual problem in a complementary finite-sample setting, and their results are hard to compare. Chen et al. (2015) propose an algorithm combining sparse embedding and the subsampled randomized Hadamard transform (SRHT), proving relative approximation bounds. Wang et al. (2017) study iterative sketching algorithms from an optimization point of view, for both the primal and the dual problems. Dobriban & Liu (2018) study sketching using asymptotic random matrix theory, but only for unregularized linear regression. Chowdhury et al. (2018) propose a data-dependent algorithm in light of the ridge leverage scores. Other related works include Sarlos (2006); Ailon & Chazelle (2006); Drineas et al. (2006; 2011); Dhillon et al. (2013); Ma et al. (2015); Raskutti & Mahoney (2016); Gonen et al. (2016); Thanei et al. (2017); Ahfock et al. (2017); Lopes et al. (2018); Huang (2018).\nThe structure of the paper is as follows: We state our results on representation, risk, and biasvariance tradeoff in Section 2. We study the bias of cross-validation for choosing the regularization parameter in Section 3. We study the accuracy of randomized primal and dual sketching for both orthogonal and Gaussian sketches in Section 4. We provide proofs and additional simulations in the Appendix. Code reproducing the experiments in the paper are available at https://github. com/liusf15/RidgeRegression." }, { "heading": "2 RIDGE REGRESSION", "text": "We work in the usual linear regression model Y = Xβ + ε, where each row xi of X ∈ Rn×p is a datapoint in p dimensions, and so there are p features. The corresponding element yi of Y ∈ Rn is its continous response (or outcome). We assume mean zero uncorrelated noise, so Eε = 0, and Cov [ε] = σ2In. We estimate the coefficient β ∈ Rp by ridge regression, solving the optimization problem\nβ̂ = arg min β∈Rp\n1 n ‖Y −Xβ‖22 + λ‖β‖22,\nwhere λ > 0 is a regularization parameter. The solution has the closed form β̂ = ( X>X/n+ λIp )−1 X>Y/n. (1)\nWe work in a ”big data” asymptotic limit, where both the dimension p and the sample size n tend to infinity, and their aspect ratio converges to a constant, p/n → γ ∈ (0,∞). Our results can be interpreted for any n and p, using γ = p/n as an approximation.\nWe recall that the empirical spectral distribution (ESD) of a p×p symmetric matrix Σ is the distribution 1p ∑p i=1 δλi where λi, i = 1, . . . 
, p are the eigenvalues of Σ, and δx is the point mass at x. This is a standard notion in random matrix theory, see e.g., Marchenko & Pastur (1967); Tulino & Verdú (2004); Couillet & Debbah (2011); Yao et al. (2015). The ESD is a convenient tool to summarize all information obtainable from the eigenvalues of a matrix. For instance, the trace of Σ is proportional to the mean of the distribution, while the condition number is related to the range of the support. As is common, we will work in models where there is a sequence of covariance matrices Σ = Σp, and their ESDs converges in distribution to a limiting probability distribution. The results become simpler, because they depend only on the limit.\nBy extension, we say that the ESD of the n× p matrix X is the ESD of X>X/n. We will consider some very specific models for the data, assuming it is of the form X = UΣ1/2, where U has iid entries of zero mean and unit variance. This means that the datapoints, i.e., the rows of X , have the form xi = Σ1/2ui, i = 1, . . . , p, where ui have iid entries. Then Σ is the ”true” covariance matrix of the features, which is typically not observed. These types of models for the data are very common in random matrix theory, see the references mentioned above.\nUnder these models, it is possible to characterize precisely the deviations between the empirical covariance matrix Σ̂ = n−1X>X and the population covariance matrix Σ, dating back to the well known classical Marchenko-Pastur law for eigenvectors (Marchenko & Pastur, 1967), extended to more general models and made more precise, including results for eigenvectors (see e.g., Tulino & Verdú, 2004; Couillet & Debbah, 2011; Yao et al., 2015, and references therein). This has been used to study methods for estimating the true covariance matrix, with several applications (e.g., Paul & Aue, 2014; Bun et al., 2017). More recently, such models have been used to study high dimensional statistical learning problems, including classification and regression (e.g., Zollanvari & Genton, 2013; Dobriban & Wager, 2018). Our work falls in this line.\nWe start by finding a precise representation of the ridge estimator. For random vectors un, vn of growing dimension, we say un and vn are deterministic equivalents, if for any sequence of fixed (or random and independent of un, vn) vectors wn such that lim sup ‖wn‖2 <∞ almost surely, we have |w>n (un − vn)| → 0 almost surely. We denote this by un vn. Thus linear combinations of un are well approximated by those of vn. This is a somewhat non-standard definition, but it turns out that it is precisely the one we need to use prior results from random matrix theory such as from (Rubio & Mestre, 2011).\nWe extend scalar functions f : R→ R to matrices in the usual way by functional calculus, applying them to the eigenvalues and keeping the eigenvectors. If M = V ΛV > is a spectral decomposition of M , then we define f(M) := V f(Λ)V >, where f(Λ) is the diagonal matrix with entries f(Λii).\nFor a fixed design matrix X , we can write the estimator as\nβ̂ = (Σ̂ + λIp) −1Σ̂β + (Σ̂ + λIp)\n−1X >ε\nn .\nHowever, for a random design, we can find a representation that depends on the true covariance Σ, which may be simpler when Σ is simple, e.g., when Σ = Ip is isotropic. Theorem 2.1 (Representation of ridge estimator). Suppose the data matrix has the form X = UΣ1/2, where U ∈ Rn×p has iid entries of zero mean, unit variance and finite 8 + c-th moment for some c > 0, and Σ = Σp ∈ Rp×p is a deterministic positive definite matrix. 
Suppose that n, p→∞ with p/n→ γ > 0. Suppose the ESD of the sequence of Σs converges in distribution to a probability measure with compact support bounded away from the origin. Suppose that the noise is Gaussian, and that β = βp is an arbitrary sequence of deterministic vectors, such that lim sup ‖β‖2 <∞. Then the ridge regression estimator is asymptotically equivalent to a random vector with the following representation:\nβ̂(λ) A(Σ, λ) · β +B(Σ, λ) · σ · Z p1/2 .\nHere Z ∼ N (0, Ip) is a random vector that is stochastically dependent only on the noise ε, and A,B are deterministic matrices defined by applying the scalar functions below to Σ:\nA(x, λ) = (cpx+ λ) −2(cp + c ′ p)x, B(x, λ) = (cpx+ λ) −1cpx.\nHere cp := c(n, p,Σ, λ) is the unique positive solution of the fixed point equation\n1− cp = cp n\ntr [ Σ(cpΣ + λI) −1] . (2) This result gives a precise representation of the ridge regression estimator. It is a sum of two terms: the true coefficient vector β scaled by the matrixA(Σ, λ), and the noise vectorZ scaled by the matrix B(Σ, λ). The first term captures to what extent ridge regression recovers the ”signal”. Morever, the\nnoise term Z is directly coupled with the noise in the original regression problem, and thus also the estimator. The result would not hold for an independent noise vector Z.\nHowever, the coefficients are not fully explicit, as they depend on the unknown population covariance matrix Σ, as well as on the fixed-point variable cp.\nSome comments are in order:\n1. Structure of the proof. The proof is quite non-elementary and relies on random matrix theory. Specifically, it uses the language of the recently developed ”calculus of deterministic equivalents” (Dobriban & Sheng, 2018), and results by (Rubio & Mestre, 2011). A general takeaway is that for n not much larger than p, the empirical covariance matrix Σ̂ is not a good estimator of the true covariance matrix Σ. However, the deviation of linear functionals of Σ̂, can be quantified. In particular, we have\n(Σ̂ + λI)−1 (cpΣ + λI)−1,\nin the sense that linear combinations of the entries of the two matrices are close (see the proof for more details). 2. Understanding the resolvent bias factor cp. Thus, cp can be viewed as a resolvent bias factor, which tells us by what factor Σ is multiplied when evaluating the resolvent (Σ̂ + λI)−1, and comparing it to its naive counterpart (Σ + λI)−1. It is known that cp is well defined, and this follows by a simple monotonicity argument, see Hachem et al. (2007); Rubio & Mestre (2011). Specifically, the left hand side of (2) is decreasing in cp, while the right hand size is increasing in Also c′p is the derivative of cp, when viewing it as a function of z := −λ. An explicit expression is provided in the proof in Section A.1, but is not crucial right now.\nHere we discuss some implications of this representation.\nFor uncorrelated features, Σ = Ip, A,B reduce to multiplication by scalars. Hence, each coordinate of the ridge regression estimator is simply a scalar multiple of the corresponding coordinate of β. One can use this to find the bias in each individual coordinate.\nTraining error and optimal regularization parameter. This theorem has implications for understanding the training error, and optimal regularization parameter of ridge regression. As it stands, the theorem itself only characterizes the behavior og linear combinations of the coordinates of the estimator. Thus, it can be directly applied to study the bias Eβ̂(λ) − β of the estimator. 
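As a complement to the discussion of the resolvent bias factor, here is a small numerical sketch (our own) of the fixed-point equation (2). Given the eigenvalues of Σ, the left-hand side 1 − cp decreases in cp while the right-hand side increases, so bisection on (0, 1) finds the unique root.

```python
import numpy as np

def resolvent_bias(evals, n, lam, tol=1e-12):
    """Solve 1 - c = (c / n) * sum_i t_i / (c * t_i + lam) for c in (0, 1),
    where evals holds the eigenvalues t_i of Sigma."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        c = 0.5 * (lo + hi)
        gap = 1.0 - c - (c / n) * np.sum(evals / (c * evals + lam))
        if gap > 0:      # c is still below the root
            lo = c
        else:
            hi = c
    return 0.5 * (lo + hi)

# isotropic check: Sigma = I_p, so all eigenvalues equal 1
c_p = resolvent_bias(np.ones(200), n=400, lam=0.7)
```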
However, it cannot directly be used to study the variance; as that would require understanding quadratic functionals of the estimator. This seems to require significant advances in random matrix theory, going beyond the results of Rubio & Mestre (2011). However, we show below that with additional assumptions on the structure of the parameter β, we can derive the MSE of the estimator in other ways.\nWe work in a random-effects model, where the p-dimensional regression parameter β is random, each coefficient has zero mean Eβi = 0, and is normalized so that Varβi = α2/p. This ensures that the signal strength E‖β‖2 = α2 is fixed for any p. The asymptotically optimal λ in this setting is always λ∗ = γσ2/α2 see e.g., Tulino & Verdú (2004); Dicker (2016); Dobriban & Wager (2018). The ridge regression estimator with λ = pσ2/(nα2) is the posterior mean of β, when β and ε are normal random variables.\nFor a distribution F , we define the quantities\nθi(λ) =\n∫ 1\n(x+ λ)i dFγ(x),\n(i = 1, 2, . . .). These are the moments of the resolvent and its derivatives (up to constants). We use the following loss functions: mean squared estimation error: M(β̂) = E‖β̂ − β‖22, and residual or training error: R(β̂) = E [‖]Y −Xβ̂‖22. Theorem 2.2 (MSE and training error of ridge). Suppose β has iid entries with Eβi = 0, Var [βi] = α2/p, i = 1, . . . , p and β is independent of X and ε. Suppose X is an arbitrary n × p matrix depending on n and p, and the ESD of X converges weakly to a deterministic distribution F as n, p → ∞ and p/n → γ. Then the asymptotic MSE and residual error of the ridge regression estimator β̂(λ) has the form\nlim n→∞\nM(β̂(λ)) = α2λ2θ2 + γσ 2[θ1 − λθ2], (3)\nlim n→∞\nR(β̂(λ)) = α2λ2[θ1 − λθ2] + σ2 [ 1− γ(1 + λθ1 − λ2θ2) ] , (4)\nBias-variance tradeoff. Building on this, we can also study the bias-variance tradeoff of ridge regression. Qualitatively, large λ leads to more regularization, and decreases the variance. However, it also increases the bias. Our theory allows us to find the explicit formulas for the bias and variance as a function of λ. See Figure 1 for a plot and Sec. A.3 for the details. As far as we know, this is one of the few examples of high-dimensional asymptotic problems where the precise form of the bias and variance can be evaluated.\nBias-variance tradeoff at optimal λ∗ = γσ2/α2. (see Figure 6) This can be viewed as the ”pure” effect of dimensionality on the problem, keeping all other parameters fixed, and has intriguing properties. The variance first increases, then decreases with γ. In the ”classical” low-dimensional case, most of the risk is due to variance, while in the ”modern” high-dimensional case, most of it is due to bias. This is consistent with other phenomena in proportional-limit asymptotics, e.g., that the map between population and sample eigenvalue distributions is asymptotically deterministic (Marchenko & Pastur, 1967).\nFuture applications. This fundamental representation may have applications to important statistical inference questions. For instance, inference on the regression coefficient β and the noise variance σ2 are important and challenging problems. Can we use our representation to develop debiasing techniques for this task? This will be interesting to explore in future work." }, { "heading": "3 CROSS-VALIDATION", "text": "How can we choose the regularization parameter? In practice, cross-validation (CV) is the most popular approach. 
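Before describing CV in detail, note that the limiting risk (3) itself can be evaluated in a few lines, which gives the population target whose minimizer λ* = γσ²/α² CV tries to locate. A small sketch for the isotropic case Σ = I, using the explicit θ1 from Appendix A.6.1 and θ2 = −dθ1/dλ by central difference (the finite-difference step is our choice):

```python
import numpy as np

def theta1(gam, lam):
    """theta_1(gamma, lambda) for the standard MP law (Appendix A.6.1)."""
    a = -lam + gam - 1.0
    return (a + np.sqrt(a * a + 4.0 * lam * gam)) / (2.0 * lam * gam)

def amse(gam, lam, alpha2, sigma2, h=1e-6):
    """Limiting MSE (3); theta_2 = -d(theta_1)/d(lambda) by central difference."""
    th1 = theta1(gam, lam)
    th2 = -(theta1(gam, lam + h) - theta1(gam, lam - h)) / (2.0 * h)
    return alpha2 * lam ** 2 * th2 + gam * sigma2 * (th1 - lam * th2)

gam, alpha2, sigma2 = 0.5, 1.0, 1.0
lam_star = gam * sigma2 / alpha2         # optimal lambda = gamma sigma^2 / alpha^2
print(amse(gam, lam_star, alpha2, sigma2))
```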
However, it is well known that CV has a bias for estimating the error rate, because it uses a smaller number of samples than the full data size (e.g., Hastie et al., 2009, p. 243). In this section, we study related questions, proposing a bias-correction method for the optimal regularization parameter. This is closely connected to the previous section, because it relies on the same random-effects theoretical framework. In fact, our conclusions here are a direct consequence of the properties of that framework.\nSetup. Suppose we split the n datapoints (samples) into K equal-sized subsets, each containing n0 = n/K samples. We use the k-th subset (Xk, Yk) as the validation set and the other K − 1 subsets (X−k, Y−k), with total sample size n1 = (K − 1)n/K as the training set. We find the ridge\nregression estimator β̂−k, i.e. β̂−k(λ) = ( X>−kX−k + n1λIp )−1 X>−kY−k.\nThe expected cross-validation error is, for isotropic covariance, i.e., Σ = I ,\nCV (λ) = EĈV (λ) = E\n[ 1\nK K∑ k=1 ‖Yk −Xkβ̂−k(λ)‖22/n0\n] = σ2 + E [ ‖β̂−k − β‖22 ] .\nBias in CV. When n, p tend to infinity so that p/n → γ > 0, and in the random effects model with Eβi = 0, Varβi = α2/p described above, the minimizer of CV (λ) tends to λ∗k = γ̃σ2/α2, where γ̃ is the limiting aspect ratio of X−k, i.e. γ̃ = γK/(K − 1). Since the aspect ratios of X−k and X differ, the limiting minimizer of the cross-validation estimator of the test error is biased for the limiting minimizer of the actual test error, which is λ∗ = γσ2/α2.\nBias-correction. Suppose we have found λ̂∗k, the minimizer of ĈV (λ). Afterwards, we usually refit ridge regression on the entire dataset, i.e., find\nβ̂(λ̂∗) = (X>X + λ̂∗nI)−1X>Y.\nBased on our bias calculation, we propose to use a bias-corrected parameter\nλ̂∗ := λ̂∗k K − 1 K .\nSo if we use 5 folds, we should multiply the CV-optimal λ by 0.8. We find it surprising that this theoretically justified bias-correction does not depend on any unknown parameters, such as β, α2, σ2.While the bias of CV is widely known, we are not aware that this bias-correction for the regularization parameter has been proposed before.\nNumerical examples. Figure 2 shows on two empirical data examples that the debiased estimator gets closer to the optimal λ than the original minimizer of the CV. However, in this case it does not significantly improve the test error. Simulation results in Section A.4 also show that the bias-correction correctly shrinks the regularization parameter and decreases the test error. We also consider examples where p n (i.e., γ 1), because this is a setting where it is known that the bias of CV can be large (Tibshirani & Tibshirani, 2009). However, in this case, we do not see a significant improvement.\nExtensions. The same bias-correction idea also applies to train-test validation. In addition, there is a special fast “short-cut” for leave-one-out cross-validation in ridge regression (e.g., Hastie et al.,\n2009, p. 243), which has the same cost as one ridge regression. The minimizer converges to λ∗ (Hastie et al., 2019). However, we think that the bias-correction idea is still valuable, as the idea applies beyond ridge regression: CV selects regularization parameters that are too large. See Section A.5 for more details and experiments comparing different ways of choosing the regularization parameter." }, { "heading": "4 SKETCHING", "text": "A final important question about ridge regression is how to compute it in practice. 
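Before turning to computation, the recipe from Section 3 is compact enough to state in code. The following sketch does K-fold CV over a grid of candidate λ (the grid search is our illustration choice, not part of the theory) and then applies the (K−1)/K bias correction; the returned λ is then used to refit ridge on the full data.

```python
import numpy as np

def ridge_fit(X, Y, lam):
    n, p = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ Y / n)

def cv_lambda_corrected(X, Y, lams, K=5, seed=0):
    """K-fold CV over the grid `lams`, then the (K-1)/K bias correction."""
    n = X.shape[0]
    folds = np.array_split(np.random.default_rng(seed).permutation(n), K)
    err = np.zeros(len(lams))
    for k in range(K):
        te = folds[k]
        tr = np.concatenate([f for j, f in enumerate(folds) if j != k])
        for i, lam in enumerate(lams):
            b = ridge_fit(X[tr], Y[tr], lam)
            err[i] += np.mean((Y[te] - X[te] @ b) ** 2) / K
    lam_k = lams[np.argmin(err)]
    return lam_k * (K - 1) / K   # debiased lambda; refit on the full data with it
```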
In this section, we study that problem in the same high-dimensional model used throughout our paper. The computation complexity of ridge regression, O(np min(n, p)), can be intractable in modern large-scale data analysis. Sketching is a popular approach to reducing the time complexity by reducing the sample size and/or dimension, usually by random projection or sampling (e.g. Mahoney, 2011; Woodruff, 2014; Drineas & Mahoney, 2016). Specifically, primal sketching approximates the sample covariance matrix X>X/n by X>L>LX/n, where L is an m× n sketching matrix, and m < n. If L is chosen as a suitable random matrix, then this can still approximate the original sample covariance matrix. Then the primal sketched ridge regression estimator is\nβ̂p = ( X>L>LX/n+ λIp )−1 X>Y/n. (5)\nDual sketching reduces p instead. An equivalent expression for ridge regression is β̂ = n−1X> ( XX>/n+ λIn )−1 Y . Dual sketched ridge regression reduces the computation cost of the Gram matrix XX>, approximating it by XRR>X> for another sketching matrix R ∈ Rp×d (d < p), so\nβ̂d = X > (XRR>X>/n+ λIn)−1 Y/n. (6)\nThe sketching matrices R and L are usually chosen as random matrices with iid entries (e.g., Gaussian ones) or as orthogonal matrices. In this section, we study the asymptotic MSE for both orthogonal (Section 4.1) and Gaussian sketching (Section 4.2). We also mention full sketching, which performs ridge after projecting down both X and Y . In section A.11, we find its MSE. However, the other two methods have better tradeoffs, and we can empirically get better results for the same computational cost." }, { "heading": "4.1 ORTHOGONAL SKETCHING", "text": "First we consider primal sketching with orthogonal projections. These can be implemented by subsampling, Haar distributed matrices, or subsampled randomized Hadamard transforms (Sarlos, 2006). We recall that the standard Marchenko-Pastur (MP) law is the probability distribution which is the limit of the ESD of X>X/n, when the n× p matrix X has iid standard Gaussian entries, and n, p → ∞ so that p/n → γ > 0, which has an explicit density (Marchenko & Pastur, 1967; Bai & Silverstein, 2010). Theorem 4.1 (Primal orthogonal sketching). Suppose β has iid entries with Eβi = 0, Var [βi] = α2/p, i = 1, . . . , p and β is independent of X and ε. Suppose X has iid standard normal entries.\nWe compute primal sketched ridge regression (5) with an m × n orthogonal matrix L (m < n, LL> = Im). Let n, p and m tend to infinity with p/n→ γ ∈ (0,∞) and m/n→ ξ ∈ (0, 1). Then the MSE of β̂p(λ) has the limit\nM(λ) = α2\n[ (λ+ ξ − 1)2 + γ(1− ξ) ] θ2 ( γ ξ , λ ξ ) ξ2 + γσ2 ξθ1 ( γ ξ , λ ξ ) − (λ+ ξ − 1)θ2 ( γ ξ , λ ξ ) ξ2 ,\n(7) where θi(γ, λ) = ∫\n(x + λ)−idFγ(x) and Fγ is the standard Marchenko-Pastur law with aspect ratio γ.\nStructure of the proof. The proof is in Section A.6, with explicit formulas in Section A.6.1. The θi are related to the resolvent of the MP law and its derivatives. In the proof, we decompose the\nMSE as the sum of variance and squared bias, both of which further reduce to the traces of certain random matrices, whose limits are determined by the MP law Fγ and λ. The two terms on the RHS of Equation (7) are the limits of squared bias and variance, respectively. There is an additional key step in the proof, which introduces the orthogonal complement L1 of the matrix L such that L>L + L>1 L1 = In, which leads to some Gaussian random variables appearing in the proof, and simplifies calculations.\nSimulations. 
A simulation in Figure 3 (left) shows a good match with our theory. It also shows that sketching does not increase the MSE too much. In this case, by reducing the sample size to half the original one, we only increase the MSE by a factor of 1.05. This shows sketching can be very effective. We also see in Figure 3 (right) that variance is compromised much more than bias.\nRobustness to tuning parameter. The reader may wonder how strongly this depends on the choice of the regularization parameter λ. Perhaps ridge regression works poorly with this λ, so sketching cannot worsen it too much? What happens if we take the optimal λ instead of a fixed one? In experiments in Section A.12 we show that the behavior is quite robust to the choice of regularization parameter.\nThe next theorem states a result for dual orthogonal sketching.\nTheorem 4.2 (Dual orthogonal sketching). Under the conditions of Theorem 4.1, we compute the dual sketched ridge regression with an orthogonal p × d sketching matrix R (d 6 p, R>R = Id). Let n, p and d go to infinity with p/n→ γ ∈ (0,∞) and d/n→ ζ ∈ (0, γ). Then the MSE of β̂d(λ) has the limit\nα2\nγ\n[ γ − 1 + (λ− γ + ζ)2θ̄2(ζ, λ) + (γ − ζ)θ̄21(ζ, λ) ] + σ2 [ θ̄1(ζ, λ)− (λ+ ζ − γ)θ̄2(ζ, λ) ] ,\nwhere θ̄i(ζ, λ) = (1− ζ)/λi + ζ ∫ (x+λ)−idFζ(x), and Fζ is the standard Marchenko-Pastur law.\nProof structure and simulations. The proof in Section A.7 follows similar path to the previous one. Here θ̄i comes in because of the companion Stieltjes transform of MP law. The simulation results shown in Figure 11 agrees well with our theory. They are similar to the ones before: sketching has favorable properties, and the bias increases less than the variance.\nOptimal tuning parameters. For both primal and dual sketching, the optimal regularization parameter minimizing the MSE seems analytically intractable. Instead, we use a numerical approach in our experiments, based on a binary search. Since this is one-dimensional problem, there are no numerical issues. See Figure 13 in Section A.12.3." }, { "heading": "4.1.1 EXTREME PROJECTION — MARGINAL REGRESSION", "text": "It is of special interest to investigate extreme projections, where the sketching dimension is much reduced compared to the sample size, so m n. This corresponds to ξ = 0. This can also be viewed as a scaled marginal regression estimator, i.e., β̂ ∝ X>Y . For dual sketching, the same case can be recovered with ζ = 0. Another interest of studying this special case is that the formula for MSE simplifies a lot.\nTheorem 4.3 (Marginal regression). Under the same assumption as Theorem 4.1, let ξ = 0. Then the form of the MSE is M(λ) = [α2 [ (λ− 1)2 + γ ] + σ2γ]/λ2. Moreover, the optimal λ∗ that minimizes this equals γσ2/α2 + 1 + γ and the optimal MSE is M(λ∗) = α2 ( 1− α2/[α2(1 + γ) + γσ2] ) .\nThe proof is in Section A.8. When is the optimal MSE of marginal regression small? Compared to the MSE of the zero estimator α2, it is small when γ(σ2/α2 + 1) + 1 is large. In Figure 4 (left), we compare marginal and ridge regression for different aspect ratios and SNR. When the signal to noise ratio (SNR) α2/σ2 is small or the aspect ratio γ is large, marginal regression does not increase the MSE much. As a concrete example, if we take α2 = σ2 = 1 and γ = 0.7, the marginal MSE is 1− 1/2.4 ≈ 0.58. The optimal ridge MSE is about 0.52, so their ratio is only ca. 0.58/0.52 ≈ 1.1. It seems quite surprising that a simple-minded method like marginal regression can work so well. 
However, the reason is that when the SNR is small, we cannot expect ridge regression to have good performance. Large γ can also be interpreted as small SNR, where ridge regression works poorly and sketching does not harm performance too much." }, { "heading": "4.2 GAUSSIAN SKETCHING", "text": "In this section, we study Gaussian sketching. The following theorem states the bias of dual Gaussian sketching. The bias is enough to characterize the performance in the high SNR regime where α/σ → ∞, and we discuss the extension to low SNR after the proof. Theorem 4.4 (Bias of dual Gaussian sketch). Suppose X is an n × p standard Gaussian random matrix. Suppose also that R is a p × d matrix with i.i.d. N (0, 1/d) entries. Then the bias of dual sketch has the expression Bias2(β̂d) = α2 + α2/γ · [m′(z)− 2m(z)] |z=0, where m is a function described below, and m′(z) denotes the derivative of m w.r.t. z. Below, we use the branch of the square root with positive imaginary part.\nThe function m is characterized by its inverse function, which has the explicit formula m−1(z) = 1/[1 + z/ζ]− [γ + 1− √ (γ − 1)2 + 4λz]/(2z) for complex z with positive imaginary part.\nAbout the proof. The proof is in Section A.9.We mention that the same result holds when the matrices involved have iid non-Gaussian entries, but the proof is more technical. The current proof is based on free probability theory (e.g., Voiculescu et al., 1992; Hiai & Petz, 2006; Couillet & Debbah, 2011). The function m is the Stieltjes transform of the free additive convolution of a standard MP law F1/ξ and a scaled inverse MP law λ/γ · F−11/γ (see the proof).\nNumerics. To evaluate the formula, we note that m−1(m(0)) = 0, so m(0) is a root of m−1. Also, dm(0)/dz equals 1/(dm−1(y)/dy|y=m(0)), the reciprocal of the derivative of m−1 evaluated at m(0). We use binary search to find the numerical solution. The theoretical result agrees with the simulation quite well, see Figure 4.\nSomewhat unexpectedly, the MSE of dual sketching can be below the MSE of ridge regression, see Figure 4. This can happen when the original regularization parameter is suboptimal. As d grows, the MSE of Gaussian dual sketching converges to that of ridge regression.\nWe have also found the bias of primal Gaussian sketching. However, stating the result requires free probability theory, and so we present it in the Appendix, see Theorem A.1. To further validate our results, we present additional simulations in Sec. A.12, for both fixed and optimal regularization parameters after sketching. A detailed study of the computational cost for sketching in Sec. A.13 concludes, as expected, that primal sketching can reduce cost when p < n, while dual sketching can reduce it when p > n; and also provides a more detailed analysis." }, { "heading": "ACKNOWLEDGMENTS", "text": "The authors thank Ken Clarkson for helpful discussions and for providing the reference Chen et al. (2015). ED was partially supported by NSF BIGDATA grant IIS 1837992. SL was partially supported by a Tsinghua University Summer Research award. A version of our manuscript is available on arxiv at https://arxiv.org/abs/1910.02373." 
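As a code companion to Section 4, here is a numpy sketch (ours) of the primal (5) and dual (6) estimators with Gaussian sketching matrices, scaled so that E[LᵀL] = I_n and E[RRᵀ] = I_p as in Theorems 4.4 and A.1. Orthogonal variants would replace the Gaussian draws with matrices having orthonormal rows or columns.

```python
import numpy as np

rng = np.random.default_rng(1)

def primal_sketch_ridge(X, Y, lam, m):
    """Primal sketch (5): replace X^T X / n by X^T L^T L X / n, L of size m x n."""
    n, p = X.shape
    LX = rng.standard_normal((m, n)) / np.sqrt(m) @ X
    return np.linalg.solve(LX.T @ LX / n + lam * np.eye(p), X.T @ Y / n)

def dual_sketch_ridge(X, Y, lam, d):
    """Dual sketch (6): replace X X^T / n by X R R^T X^T / n, R of size p x d."""
    n, p = X.shape
    XR = X @ (rng.standard_normal((p, d)) / np.sqrt(d))
    return X.T @ np.linalg.solve(XR @ XR.T / n + lam * np.eye(n), Y) / n
```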
}, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOF OF THEOREM 2.1", "text": "If p/n → γ and the spectral distribution of Σ converges to H , we have by the general MarchenkoPastur (MP) theorem of Rubio and Mestre (Rubio & Mestre, 2011), that\n(Σ̂ + λI)−1 (cpΣ + λI)−1,\nwhere cp := c(n, p,Σ, λ) is the unique positive solution of the fixed point equation\n1− cp = cp n\ntr [ Σ(cpΣ + λI) −1] . Here, using the terminology of the calculus of deterministic equivalents (Dobriban & Sheng, 2018), two sequences of (not necessarily symmetric) n × n matrices An, Bn of growing dimensions are equivalent, and we write\nAn Bn if limn→∞ tr [Cn(An −Bn)] = 0 almost surely, for any sequence Cn of (not necessarily symmetric) n × n deterministic matrices with bounded trace norm, i.e., such that lim sup ‖Cn‖tr < ∞ (Dobriban & Sheng, 2018). Informally, linear combinations of the entries of An can be approximated by the entries of Bn.\nWe start with\nβ̂ = ( X>X/n+ λIp )−1 X>Y/n = ( X>X/n+ λIp )−1 X>(Xβ + ε) n\n= (Σ̂ + λIp) −1Σ̂β + (Σ̂ + λIp)\n−1X >ε\nn .\nThen, by the general MP law written in the language of the calculus of deterministic equivalents\n(Σ̂ + λIp) −1Σ̂ = Ip − λ(Σ̂ + λIp)−1 Ip − λ(cpΣ + λI)−1 = cpΣ(cpΣ + λI)−1.\nBy the definition of equivalence for vectors,\n(Σ̂ + λIp) −1Σ̂β cpΣ(cpΣ + λI)−1β.\nWe note a subtle point here. The rank of the matrix M := (Σ̂ + λIp)−1Σ̂ is at most n, and so it is not a full rank matrix when n < p. In contrast, cpΣ(cpΣ + λI)−1 can be a full rank matrix. Therefore, for the vectors β in the null space of Σ̂, which is also the null space of X , we certainly have that the two sides are not equal. However, here we assumed that the matrix X is random, and so its null space is a random max(p−n, 0) dimensional linear space. Therefore, for any fixed vector β, the random matrix M will not contain it in its null space with high probability, and so there is no contradiction.\nWe should also derive an asymptotic equivalent for\n(Σ̂ + λIp) −1X\n>ε n .\nSuppose we have Gaussian noise, and let Z ∼ N (0, Ip). Then we can write\n(Σ̂ + λIp) −1X\n>ε n =d (Σ̂ + λIp) −1Σ̂1/2 σZ n1/2 .\nSo the question reduces to finding a deterministic equivalent for h(Σ̂), where h(x) = (x + λ)−2x. Note that\nh(x) = (x+ λ)−2x = (x+ λ)−2(x+ λ− λ) = (x+ λ)−1 − λ(x+ λ)−2.\nBy the calculus of determinstic equivalents: (Σ̂ + λ)−1 (cpΣ + λI)−1. Moreover, fortunately the limit of the second part was recently calculated in (Dobriban & Sheng, 2019). This used the so-called ”differentiation rule” of the calculus of deterministic equivalents to find\n(Σ̂ + λ)−2 (cpΣ + λI)−2(I − c′pΣ).\nThe derivative c′p = dcp/dz has been found in Dobriban & Sheng (2019), in the proof of Theorem 3.1, part 2b. The result is (with γp = p/n,Hp the spectral distribution of Σ, and T a random variable distributed according to Hp)\nc′p = γpEHp cpT (cpT−z)2\n−1 + γpzEHp T(cpT−z)2 . (8)\nSo, we find the final answer\n(Σ̂ + λIp) −1Σ̂1/2 A(Σ, λ) := (cpΣ + λI)−1 − λ(cpΣ + λI)−2(I − c′pΣ)." }, { "heading": "A.2 RISK ANALYSIS", "text": "Figure 5 shows a simulation result. We see a good match between theory and simulation." }, { "heading": "A.2.1 PROOF OF THEOREM 2.2", "text": "Proof. The MSE of β̂ has the form\nE‖β̂ − β‖2 = bias2 + δ2,\nwhere\nbias2 = E [∥∥∥(X>X/n+ λIp)−1X>X/nβ − β∥∥∥2\n2\n] ,\nδ2 = σ2E [∥∥∥(X>X/n+ λIp)−1 n−1X>∥∥∥2\nF\n] .\nWe assume that X has iid entries of zero mean and unit variance, and that Eβ = 0, Var [β] = α2/pIp. As p/n→ γ as n goes to infinity, the ESD of 1nX\n>X converges to the MP law Fγ . 
So we have\nbias2 = E [∥∥∥λ (X>X/n+ λIp)−1 β∥∥∥2\n2 ] = α2λ2E [ 1\np tr[ ( X>X/n+ λIp )−2 ] ] → α2λ2 ∫ 1 (x+ λ)2 dFγ(x),\nand\nδ2 = σ2 n2 E [ tr[ ( X>X/n+ λIp )−2 X>X] ] = σ2 n E [ tr[ ( X>X/n+ λIp\n)−1 − λ (X>X/n+ λIp)−2]] → σ2γ [∫ 1\nx+ λ dFγ(x)− λ\n∫ 1\n(x+ λ)2 dFγ(x)\n] .\nDenoting θi(γ, λ) = ∫ 1 (x+λ)i dFγ(x), then\nAMSE(β̂) = α2λ2θ2 + γσ 2[θ1 − λθ2]. (9)\nFor the standard Marchenko-Pastur law (i.e., when Σ = Ip), we have the explicit forms of θ1 and θ2. Specifically,\nθ1 =\n∫ 1\nx+ λ dFγ(x) = −\n1\n2\n[ 2(1 + λ)\nλγ + 2 √ γλ z2 ] where\nz2 = − 1\n2\n[ ( √ γ +\n1 + λ √ γ ) +\n√ ( √ γ +\n1 + λ √ γ )2 − 4\n] .\nIt is known that the limiting Stieltjes transform mFγ := mγ of Σ̂ has the explicit form (Marchenko & Pastur, 1967):\nmγ(z) = (z + γ − 1) + √ (z + γ − 1)2 − 4zγ −2zγ .\nAs usual in the area, we use the principal branch of the square root of complex numbers. Hence\nθ1 = (−λ+γ−1)+ √ (−λ+γ−1)2+4λγ 2λγ . Also\nθ2(γ, λ) =\n∫ 1\n(x+ λ)2 dFγ(x) = −\n∫ d\ndλ\n1\nx+ λ dFγ(x)\n= − d dλ θ1 = − 1 γλ2 + 1 √ γ d dλ z2 λ\n= − 1 γλ2 + γ + 1 2γλ2 − 1 2 √ γ [ λ+ γ + 1 γλ √\n( √ γ + 1+λ√γ )\n2 − 4 −\n√ ( √ γ + 1+λ√γ ) 2 − 4\nλ2 ]\nFor the residual, E [ 1\nn ‖Y −Xβ̂‖22|X\n] = α2λ2 1 p tr[ ( X>X/n+ λIp )−1 − λ (X>X/n+ λIp)−2] + σ2 1\nn [tr(In)− 2 tr\n( X>X/n+ λIp )−1 X>X/n+ tr (( X>X/n+ λIp )−1 X>X/n )2 ].\nNext, E [ 1 p tr[ (( X>X/n+ λIp )−1 X>X/n )2 ] ] = E [ 1 p tr[ ( Ip − λ ( X>X/n+ λIp )−1)2 ] ] → 1− 2λθ1 + λ2θ2.\nTherefore E [ 1\nn ‖Y −Xβ̂‖22\n] →α2λ2[θ1 − λθ2] + σ2 [ 1− 2γ(1− λθ1) + γ(1− 2λθ1 + λ2θ2) ] = α2λ2[θ1 − λθ2] + σ2 [ 1− γ(1 + λθ1 − λ2θ2) ] ." }, { "heading": "A.3 BIAS-VARIANCE TRADEOFF", "text": "The limiting MSE decomposes into a limiting squared bias and variance. The specific forms of these are\nbias2 = α2 ∫\nλ2\n(x+ λ)2 dFγ(x), var = γσ\n2\n∫ x\n(x+ λ)2 dFγ(x).\nSee Figure 1 for a plot. We can make several observations.\n1. The bias increases with λ, starting out at zero for λ = 0 (linear regression), and increasing to α2 as λ→∞ (zero estimator).\n2. The variance decreases with λ, from γσ2 ∫ x−1dFγ(x) to zero.\n3. In the setting plotted in the figure, when α2 and σ2 are roughly comparable, there are additional qualitative properties we can investigate. When γ is small, the regularization parameter λ influences the bias more strongly than the variance (i.e., the derivative of the normalized quantities in the range plotted is generally larger for the normalized squared bias). In contrast when γ is large, the variance is influenced more.\nNext we consider how bias and variance change with γ at the optimal λ∗ = γσ2/α2. This can be viewed as the ”pure” effects of dimensionality on the problem, keeping all other parameters fixed. Ineed, α2/σ2 can be viewed as the signal-to-noise ratio (SNR), and is fixed. This analysis allows us to study for the best possible estimator (ridge regression, a Bayes estimator), behaves with the dimension. We refer to Figure 6, where we make some specific choices of α and σ.\n1. Clearly the overall risk increases, as the problem becomes harder with increasing dimension. This is in line with our intuition.\n2. The classical bias-variance tradeoff can be summarized by the equation\nbias2(λ) + var(λ) >M∗(α, γ),\nwhere we made explicit the dependence of the bias and variance on λ, and whereM∗(α, γ) is the minimum MSE achievable, also known as the Bayes error, for which there are explicit formulas available (Tulino & Verdú, 2004; Dobriban & Wager, 2018).\n3. The variance first increases, then decreases with γ. 
This shows that in the ”classical” low-dimensional case, most of the risk is due to variance, while in the ”modern” highdimensional case, most of it is due to bias. This observation is consistent with other phenomena in proportional-limit asymptotics, for instance that the map between population and sample eigenvalue distributions is asymptotically deterministic (Marchenko & Pastur, 1967; Bai & Silverstein, 2010)." }, { "heading": "A.4 SIMULATIONS WITH CROSS-VALIDATION", "text": "See Figure 7. We consider both small and large γ. Our bias-correction procedure shrinks the λ to the correct direction and decreases the test error. It is also shown that the one-standard-error rule (e.g., Hastie et al., 2009) does not perform well here.\nK λ\n∗\nCV . So the bias-correction decreases the test error\nby about 0.003. Right: we take n = 200, p = 1000, γ = 5, α = 3, σ = 1. The bias-correction decreases the test error from 8.92 to 8.89, so it decreases by 0.03." }, { "heading": "A.5 CHOOSING THE REGULARIZATION PARAMETER- ADDITIONAL DETAILS", "text": "Another possible prediction method is to use the average of the ridge estimators computed during cross-validation. Here it is also natural to use the CV-optimal regularization parameters, averaging β̂−k(λ̂ ∗ k), i.e.\nβ̂avg(λ̂ ∗ k) =\n1\nK K∑ k=1 β̂−k(λ̂ ∗ k).\nThis has the advantage that it does not require refitting the ridge regression estimator, and also that we use the optimal regularization parameter." }, { "heading": "A.5.1 TRAIN-TEST VALIDATION", "text": "The same bias in the regularization parameter also applies to train-test validation. Since the number of samples is changed when restricting to the training set, the optimal λ chosen by train-test validation is also biased for the true regularization parameter minimizing the test error. We will later see in simulations (Figure 8) that retraining the ridge regression estimator on the whole data will still significantly improve the performance (this is expected based on our results on CV). For prediction, here we can also use ridge regression on the training set. This effectively reduces sample size n → ntrain, where ntrain is the sample size of the training set. However, if the training set grows such that n/ntrain → 1 while ntrain → ∞, the train-test split has asymptotically optimal performance." }, { "heading": "A.5.2 LEAVE-ONE-OUT", "text": "There is a special “short-cut” for leave-one-out in ridge regression, which saves us from burdensome computation. Write loo(λ) for the leave-one-out estimator of prediction error with parameter λ. Instead of doing ridge regression n times, we can calculate the error explicitly as\nloo(λ) = 1\nn n∑ i=1\n[ Yi −X>i β̂(λ)\n1− Sii(λ)\n]2 .\nwhere S(λ) = X(X>X + nλI)−1X>. The minimizer of loo(λ) is asymptotically optimal, i.e., it converges to λ∗ (Hastie et al., 2019). However, the computational cost of this shortcut is the same as that of a train-test split. Therefore, the method described above has the same asymptotic performance.\nSimulations: Figure 8 shows simulation results comparing different cross-validation methods:\n1. kf — k-fold cross-validation by taking the average of the ridge estimators at the CV-optimal regularization parameter.\n2. kf refit — k-fold cross-validation by refitting ridge regression on the whole dataset using the CV-optimal regularization parameter. 3. kf bic — k-fold cross-validation by refitting ridge regression on the whole dataset using the CV-optimal regularization parameter, with bias correction. 4. 
tt — train-test validation, by using the ridge estimator computed on the train data, at the validation-optimal regularization parameter. Note: we expect this to be similar, but worse than the ”kf” estimator. 5. tt refit — train-test validation by refitting ridge regression on the whole dataset, using the validation-optimal regularization parameter. Note: we expect this to be similar, but slightly worse than the ”kf refit” estimator. 6. tt bic — train-test validation by refitting ridge regression on the whole dataset using the CV-optimal regularization parameter, with bias correction. 7. loo — leave-one-out\nFigure 8 shows that the naive estimators (kf and tt) can be quite inaccurate without refitting or bias correction. However, if we either refit or bias-correct, the accuracy improves. In this case, there seems to be no significant difference between the various methods." }, { "heading": "A.6 PROOF OF THEOREM 4.1", "text": "Proof. Suppose m/n→ ξ as n goes to infinity. For β̂p, we have bias2 = E [∥∥∥(X>L>LX/n+ λIp)−1X>X/nβ − β∥∥∥2\n2\n] ,\nδ2 = σ2E [∥∥∥(X>L>LX/n+ λIp)−1 n−1X>∥∥∥2\nF\n] .\nDenote M = ( X>L>LX/n+ λIp )−1 , the resolvent of the sketched matrix. We further assume that X has iid N (0, 1) entries and LL> = Im. Let L1 be an orthogonal complementary matrix of L, such that L>L+ L>1 L1 = In. We also denote N = X>L>1 L1X n . Then\nMX>X/n =M X>L>LX +X>L>1 L1X\nn = Ip − λM +MN.\nTherefore, using that Cov [β] = α2/p · Ip, we find the bias as\nbias2 = α2 p E [ tr(M − Ip)(M> − Ip) ] = α2\np\n{ λ2E [ tr[M2] ] + E [ trM2 (X>L>1 L1X) 2\nn2\n] − 2λE [ trM2N ]} .\nBy the properties of Wishart matrices (e.g., Anderson, 2003; Muirhead, 2009), we have\nE [N ] = n−m n Ip,\nE [ (N)2 ] = 1 n2 E [ Wishart(Ip, n−m)2 ] = 1 n2 [n−m+ p(n−m) + (n−m)2]Ip.\nRecalling that m,n→∞ such that m/n→ ξ, and that θi(γ, λ) = ∫ (x+ λ)−idFγ(x),\nbias2 = α2\np\n[ λ2 + n−m+ p(n−m) + (n−m)2\nn2 − 2λn−m n\n] E [ tr[M2] ] → α2[(λ+ ξ − 1)2 + γ(1− ξ)]θ2(γ, ξ, λ).\nMoreover,\nδ2 = σ2 n2 E [ tr[M2X>X] ] = σ2 n · { E [tr[M ]]− λE [ tr[M2] ] + E [ tr[M2N ]\n]} → γσ2[θ1(γ, ξ, λ)− λθ2(γ, ξ, λ) + (1− ξ)θ2(γ, ξ, λ)].\nHere we used the additional definitions\nθi(γ, ξ, λ) =\n∫ 1\n(ξx+ λ)i dFγ/ξ(x)\nθi(γ, λ) = θi(γ, ξ = 1, λ).\nNote that these can be connected to the previous definitions by\nθ1(γ, ξ, λ) = 1\nξ\n∫ 1\nx+ λ/ξ dFγ/ξ(x) =\n1 ξ θ1\n( γ\nξ , λ ξ ) θ2(γ, ξ, λ) = 1\nξ2 θ2\n( γ\nξ , λ ξ\n) .\nTherefore the AMSE of β̂p is\nAMSE(β̂p) = α 2[(λ+ ξ − 1)2 + γ(1− ξ)]θ2(γ, ξ, λ) + γσ2[θ1(γ, ξ, λ)− (λ+ ξ − 1)θ2(γ, ξ, λ)]\n= α2[(λ+ ξ − 1)2 + γ(1− ξ)] 1 ξ2 θ2\n( γ\nξ , λ ξ ) + γσ2 [ 1\nξ θ1\n( γ\nξ , λ ξ\n) − (λ+ ξ − 1) 1\nξ2 θ2\n( γ\nξ , λ ξ\n)] . (10)\nA.6.1 ISOTROPIC CASE\nConsider the special case where Γ = I , that is, X has iid N (0, 1) entries. Then Fγ is the standard MP law, and we have the explicit forms for θi = θi(γ, λ) =\n∫ 1\n(x+λ)i dFγ :\nθ1(γ, λ) = − 1 + λ\nγλ +\n1\n2 √ γλ\n[ √ γ + 1 + λ √ γ +\n√ ( √ γ +\n1 + λ √ γ )2 − 4],\nθ2(γ, λ) = − 1 γλ2 + γ + 1 2γλ2 − 1 2 √ γ ( λ+ 1 γ + 1)\n1 λ √\n( √ γ + 1+λ√γ ) 2 − 4 +\n1\n2 √ γ\n√ ( √ γ +\n1 + λ √ γ )2 − 4 1 λ2 ,\nθ̄1(ζ, λ) = ζθ1(ζ, λ) + 1− ζ λ , θ̄2(ζ, λ) = ζθ2(ζ, λ) + 1− ζ λ2 ,\nThe results are obtained by the contour integral formula∫ f(x)dFγ(x) = − 1\n4πi ∮ |z|=1 f(|1 + γz|2)(1− z2)2 z2(1 + √ γz)(z + √ γ) dz.\nSee Proposition 2.10 of Yao et al. (2015)." }, { "heading": "A.7 PROOF OF THEOREM 4.2", "text": "Proof. Suppose d/p→ ζ as n goes to infinity. 
For β̂d, we have bias2 = E [∥∥∥n−1X> (XRR>X>/n+ λIn)−1Xβ − β∥∥∥2\n2\n] ,\nδ2 = σ2 tr[ ( XRR>X>/n+ λIn )−2 XX> n2 ].\nDenote M = ( XRR>X>/n+ λIn )−1 . Note that, using that Cov [β] = α2/p · Ip\nbias2 = α2 p E [ tr[MXX>/n]2 ] − 2α 2 p E [ tr[MXX>/n] ] + α2 p tr(Ip).\nMoreover, lettingR1 to be an orthogonal complementary matrix ofR, such thatRR>+R1R>1 = In, and N = XR1R > 1 X >\nn ,\nE [ 1\np tr[MXX>/n]\n] = 1\np tr[In − λE [tr[M ]] + E [MN ]]\n→ 1 γ − λ γ\n∫ 1\nx+ λ dF̄ζ(x) + γ − ζ γ\n∫ 1\nx+ λ dF̄ζ(x),\nwhere F̄ζ is the companion MP law, that is, F̄ζ = (1 − γ)δ0 + γFζ . The third term calculated by using that XR and XR1 are independent for a Gaussian random matrix X , so that M,N are independent, and that E [N ] = p−dn In. Thus\nE [ 1\np tr[MXX>/n]\n] → 1\nγ − λ+ ζ − γ γ θ̄1(ζ, λ)\n= 1 γ − λ+ ζ − γ γ [ 1− ζ λ + ζθ1(ζ, λ) ] .\nThen E [ 1\np tr[MXX>/n]2\n] = 1 p E [ tr[In + λ 2M2 +MNMN − 2λM + 2MN − λM2N − λMNM ] .\nNote that\nE [MNMN |M ] = M [(p− d)(M> + tr(M)In) + (p− d)2M ]/n2\n= p− d+ (p− d)2\nn2 M2 + p− d n2 tr(M)M,\nso\nE [ 1\np tr[MXX>/n]2\n] → 1\nγ [1 + (λ2 − 2λ(γ − ζ) + (γ − ζ)2)θ̄2(ζ, λ)\n+ 2(γ − ζ − λ)θ̄1(ζ, λ) + (γ − ζ)θ̄21(ζ, λ)]. Thus we find the following exprssion for the limiting squared bias:\nbias2 → α 2\nγ [γ − 1 + (λ− γ + ζ)2θ̄2 + (γ − ζ)θ̄21].\nWith similar calculations (that we omit for brevity), we can find\nδ2 → σ2(θ̄1(ζ, λ)− (λ+ ζ − γ)θ̄2(ζ, λ)).\nTherefore the AMSE of β̂d is\nAMSE = α2\nγ [γ − 1 + (λ− γ + ζ)2θ̄2 + (γ − ζ)θ̄21] + σ2[θ̄1(ζ, λ)− (λ+ ζ − γ)θ̄2(ζ, λ)].\n(11)" }, { "heading": "A.8 PROOF OF THEOREM 4.3", "text": "Proof. Recall that we have m,n → ∞, such that m/n → ξ. Then we need to take ξ → 0. However, we find it more convenient to do the calculation directly from the finite sample results as m,n, p → ∞ with m/n → 0, p/n → γ, It is not hard to check that computing the results in the other way (i.e., interchanging the limits), leads to the same results. Starting from our bias formula for primal sketching, we first get\nbias2 = α2\np\n[ λ2 + n−m+ p(n−m) + (n−m)2\nn2 − 2λn−m n\n] E [ tr[ ( X>L>LX/n+ λIp )−2 ] ]\n→ α2[(λ− 1)2 + γ]/λ2.\nThe limit of the trace term is not entirely trivial, but it can be calculated by (1) observing that the m × p sketched data matrix P = LX has iid normal entries (2) thus the operator norm of P>P/n vanishes, (3) and so by a simple matrix perturbation argument the trace concentrates around p/λ2. This gives the rough steps of finding the above limit. Moreover,\nδ2 = σ2 n2 E [ tr[ ( X>L>LX/n+ λIp )−2 X>X] ] → γσ2/λ2 · EFγX2 = γσ2/λ2\nSo the MSE is M(λ) = α2[(λ − 1)2 + γ]/λ2 + σ2 · γ/λ2. From this it is elementary to find the optimal λ and its objective value." }, { "heading": "A.9 PROOF OF THEOREM 4.4", "text": "Proof. Note that the bias can be written as\nbias2 = α2\np E\n[ tr[ ( XRR>X>\nnd + λIn\n)−1 XX>\nnd ]2\n]\n− 2α 2 p E [ tr[ ( XRR>X>/n+ λIn )−1 XX>/n] ] + α2.\nWrite G = XX>. Since RR> ∼ Wp(Ip, d), we have XRR>X> ∼ Wn(G, d). So XRR>X> d = G1/2WG1/2, where W ∼ Wn(In, d).\nE [ tr[ ( XRR>X>\nnd + λIn\n)−1 XX>/n] ] = E [ tr[(G1/2WG1/2/d+ nλIn) −1G] ]\n= E [ tr[( W\nd + λ(\nG n )−1)−1]\n] .\nSo we need to find the law of Wd + λ γ ( G p ) −1. Suppose first that G = XX> ∼ Wn(In, p). Then W and G−1 are asymptotically freely independent. The l.s.d. of W/d is the MP law F1/ξ while the l.s.d. of G/p is the MP law F1/γ . 
We need to find the additive free convolution W Ḡ, where Ḡ = λγG −1.\nRecall that the R-transform of a distribution F is defined by\nRF (z) = m −1 F (−z)−\n1 z ,\nwhere m−1F (z) is the inverse function of the Stieltjes transform of F (e.g., Voiculescu et al., 1992; Hiai & Petz, 2006; Couillet & Debbah, 2011). We can find the R-transform by solving\nmF (RF (z) + 1\nz ) = −z.\nNote that the R-transform of W/d is\nRW (z) = 1\n1− z/ξ .\nThe Stieltjes transform of G−1 is\nmG−1(z) =\n∫ 1\n1/x− z dF1/γ(x) = −\n1 z − 1 z2 m1/γ( 1 z )\n= −1 z −\n1− 1γ − 1 z + √ (1 + 1γ + 1 z )\n2 − 4γ 2 zγ\n= − 1 + 1γ − 1 z +\n√ (1 + 1γ − 1 z )\n2 − 4γ 2 zγ .\nThen the R-transform of G−1 is\nRG−1(z) = − 1 z + γ + 1−\n√ (γ + 1)2 − 4γ(z + 1)\n2z\n= γ − 1− √ (γ − 1)2 − 4γz 2z .\nSince we have the property that Raµ(z) = aRµ(az),\nRḠ = Rλ γG −1(z) =\nγ − 1− √\n(γ − 1)2 − 4λz 2z .\nHence we have\nRW Ḡ = RW +RḠ = 1 1− z/ξ + γ − 1− √ (γ − 1)2 − 4λz 2z .\nMoreover, the Stieltjes transform of µ = W Ḡ satisfies\nm−1µ (z) = m −1 W Ḡ(z) = RF (−z)−\n1 z =\n1 1 + z/ξ + γ − 1− √ (γ − 1)2 + 4λz −2z − 1 z .\nNote that\n2 α2\np E\n[ tr[ ( XRR>X>\nnd + λIn\n)−1 XX>/n] ] → 2α 2 γ Eµ [ 1 x ] = 2 α2 γ lim z→0 m(z),\nα2\np E\n[ tr[ ( XRR>X>\nnd + λIn\n)−1 XX>\nnd ]2\n] → α 2 γ Eµ [ 1 x2 ] = α2 γ lim z→0 d dz m(z).\nSo it suffices to find m(z) and ddzm(z) evaluated at zero.\nThis result can characterize the performance of sketching in the high SNR regime, where α σ. To understand the lower SNR regime, we need to study the variance, and thus we need to calculate\nvar = σ2 1\nn E\n[ tr[ ( XRR>X>\nnd + λIn\n)−2 XX>/n] ] = σ2E [ tr[( W\nd + λ γ ( G p )−1)−2G−1] ] where G = XX> ∼ Wn(In, p) is a Wishart distribution, and XRR>X> =d G1/2WG1/2, with W ∼ Wn(In, r). This seems to be quite challenging, and we leave it to future work." }, { "heading": "A.10 RESULTS FOR PRIMAL GAUSSIAN SKETCHING", "text": "The statement requires some notions from free probability, see e.g., Voiculescu et al. (1992); Hiai & Petz (2006); Nica & Speicher (2006); Anderson et al. (2010); Couillet & Debbah (2011) for references .\nTheorem A.1 (Bias of primal Gaussian sketch). Suppose X is an n× p standard Gaussian random matrix. Suppose also that L is a d× n matrix with i.i.d. N (0, 1/d) entries. Then the bias of primal sketch has the expressionMSE(β̂p) = α2 + α 2 γ [τ((a+b) −1b(a+b)−1b−1)−2τ((a+b)−1)], where a and b two free random variables, that are freely independent in a non-commutative probability space, and τ is their trace. Specifically, the law of a is the MP law F1/ξ and b = λγ b̃, where the law\nof b̃ is the MP law F1/γ .\nProof of Theorem A.1. Note that bias2 = E [∥∥∥(X>L>LX/(nd) + λIp)−1 (X>X/n)β − β∥∥∥2\n2\n] ,\nand ( X>L>LX/(nd) + λIp )−1 X> = X>(L>LXX>/(nd) + λIn) −1. Thus\nbias2 = E [∥∥∥∥X>(L>LXX>nd + λIn)−1Xn β − β ∥∥∥∥2\n2\n]\n= α2 + α2\np E[tr[(\nXX>L>L\nnd + λIn)\n−1XX > n ( L>LXX> nd + λIn) −1XX > n ]\n− 2 tr[(L >LXX>\nnd + λIn)\n−1XX >\nn ]].\nFirst we find the l.s.d. of (L >LXX> nd + λIn) −1XX> n . Write W = L >L, G = XX>. Then\n( L>LXX>\nnd + λIn)\n−1XX >\nn = (\nWG\nnd + λIn)\n−1G\nn = G−1(\nW\nd + λ(\nG n )−1)−1G,\nwhich is similar to (Wd + λ( G n ) −1)−1. So it suffices to find the l.s.d. of (Wd + λ γ ( G p ) −1)−1.\nBy the definition, W ∼ Wn(In, d), G ∼ Wn(In, p), therefore the l.s.d. of W/d converges to the MP law F1/ξ and the l.s.d. of G/p converges to the MP law F1/γ .\nAlso note that\n( XX>L>L\nnd + λIn)\n−1XX > n ( L>LXX> nd + λIn) −1XX > n = ( W d + λnG−1)−1G−1( W d + λnG−1)−1G.\nWe write A = Wd , B = λ γ ( G p ) −1. 
Then it suffices to find\nα2 p E [ tr[(A+B)−1B(A+B)−1B−1] ] .\nWe will find an expression for this using free probability. For this we will need to use some series expansions. There are two cases, depending on whether the operator norm of BA−1 is less than or greater than unity, leading to different series expansions. We will work out below the first case, but the second case is similar and leads to the same answer.\ntr[(A+B)−1B(A+B)−1B−1] = tr[A−1(I +BA−1)−1BA−1(I +BA−1)−1B−1]\nSince the operator norm of BA−1 is less unity, we have the von Neumann series expansion\n[I +BA−1]−1 = ∞∑ i=0 (−BA−1)i,\nthen we have tr[(A+B)−1B(A+B)−1B−1] = ∑ i,j≥0 (−1)i+j tr[(BA−1)i+j+1B−1A−1]\n= ∑ i,j≥0 (−1)i+j tr[(A−1B)i+j+1A−1B−1].\nSinceA andB are asymptotically freely independent in the free probability space arising in the limit (e.g., Voiculescu et al., 1992; Hiai & Petz, 2006; Couillet & Debbah, 2011), and the polynomial (a−1b)i+j+1a−1b−1 involves an alternating sequence of a, b, we have\n1 n tr[(A−1B)i+j+1A−1B−1]→ τ [(a−1b)i+j+1a−1b−1],\nwhere a and the b are free random variables and τ is their law. Specifically, a is a free random variable with the MP law F1/ξ and b is λγ b̃\n−1, where b̃ is a free r.v. with MP law F1/γ . Moreover, they are freely independent.\nHence, we have\n1 n tr[(A+B)−1B(A+B)−1B−1]→ τ [ ∑ i≥0 (−1)i(a−1b)i+1 ∑ j≥0 (−1)j(a−1b)ja−1b−1]\n= τ [(a−1b)(1 + a−1b)−1(1 + a−1b)−1a−1b−1]\n= τ [(a+ b)−1b(a+ b)−1b−1].\nTherefore,\nbias2 → α2 + α 2\nγ\n[ 1\nn tr[(A+B)−1B(A+B)−1B−1 − 2 tr[A+B]−1] ] = α2 + α2\nγ [τ((a+ b)−1b(a+ b)−1b−1)− 2τ((a+ b)−1)]." }, { "heading": "A.11 RESULTS FOR FULL SKETCHING", "text": "The full sketch estimator projects down the entire data, and then does ridge regression on the sketched data. It has the form\nβ̂f = ( X>L>LX/n+ λIp )−1 X>L>LY n .\nWe have\nbias2 = α2λ2 ∫\n1\nξx+ λ dFγ/ξ(x) = α\n2λ2 1\nξ2 θ2\n( γ\nξ , λ ξ ) var = σ2γ [∫ 1\nξx+ λ dFγ/ξ(x)− λ\n∫ 1\n(ξx+ λ)2 dFγ/ξ(x) ] = σ2γ [ 1\nξ θ1\n( γ\nξ , λ ξ\n) − λ 1\nξ2 θ2\n( γ\nξ , λ ξ\n)] ,\ntherefore\nAMSE(β̂f ) = α 2λ2\n1\nξ2 θ2\n( γ\nξ , λ ξ\n) + σ2γ [ 1\nξ θ1\n( γ\nξ , λ ξ\n) − λ 1\nξ2 θ2\n( γ\nξ , λ ξ\n)]\nThe optimal λ for full sketch is always λ∗ = γσ 2\nα2 , the same as ridge regression. Some simulation results are shown in Figure 10, and they show the expected shape (e.g., they decrease with ξ)." }, { "heading": "A.12 NUMERICAL RESULTS", "text": "" }, { "heading": "A.12.1 DUAL ORTHOGONAL SKETCHING", "text": "See Figure 11 for additional simulation results for dual orthogonal sketching." }, { "heading": "A.12.2 PERFORMANCE AT A FIXED REGULARIZATION PARAMETER", "text": "First we fix the regularization parameter at the optimal value for original ridge regression. The results are visualized in Figure 12. On the x axis, we plot the reduction in sample size m/n for primal sketch, and the reduction in dimension d/p for dual sketch. In this case, primal and dual sketch will increase both bias and variance, and empirically in the current case, dual sketch increases them more. So in this particular case, primal sketch is preferred." }, { "heading": "A.12.3 PERFORMANCE AT THE OPTIMAL REGULARIZATION PARAMETER", "text": "We find the optimal regularization parameter λ for primal and dual orthogonal sketching. Then we use the optimal regularization parameter for all settings, see Figure 13. Both primal and dual sketch increase the bias, but decrease the variance. It is interesting to note that, for equal parameters ξ and\nζ, and in our particular case, dual sketch has smaller variance, but larger bias. 
" }, { "heading": "A.11 RESULTS FOR FULL SKETCHING", "text": "The full sketch estimator projects down the entire data, and then does ridge regression on the sketched data. It has the form
$$\hat{\beta}_f = \Big(\frac{X^\top L^\top L X}{n} + \lambda I_p\Big)^{-1} \frac{X^\top L^\top L Y}{n}.$$
We have
$$\mathrm{bias}^2 = \alpha^2 \lambda^2 \int \frac{1}{(\xi x + \lambda)^2}\, dF_{\gamma/\xi}(x) = \alpha^2 \lambda^2 \frac{1}{\xi^2}\, \theta_2\Big(\frac{\gamma}{\xi}, \frac{\lambda}{\xi}\Big),$$
$$\mathrm{var} = \sigma^2 \gamma \bigg[\int \frac{1}{\xi x + \lambda}\, dF_{\gamma/\xi}(x) - \lambda \int \frac{1}{(\xi x + \lambda)^2}\, dF_{\gamma/\xi}(x)\bigg] = \sigma^2 \gamma \bigg[\frac{1}{\xi}\, \theta_1\Big(\frac{\gamma}{\xi}, \frac{\lambda}{\xi}\Big) - \lambda \frac{1}{\xi^2}\, \theta_2\Big(\frac{\gamma}{\xi}, \frac{\lambda}{\xi}\Big)\bigg],$$
therefore
$$\mathrm{AMSE}(\hat{\beta}_f) = \alpha^2 \lambda^2 \frac{1}{\xi^2}\, \theta_2\Big(\frac{\gamma}{\xi}, \frac{\lambda}{\xi}\Big) + \sigma^2 \gamma \bigg[\frac{1}{\xi}\, \theta_1\Big(\frac{\gamma}{\xi}, \frac{\lambda}{\xi}\Big) - \lambda \frac{1}{\xi^2}\, \theta_2\Big(\frac{\gamma}{\xi}, \frac{\lambda}{\xi}\Big)\bigg].$$
The optimal $\lambda$ for the full sketch is always $\lambda^* = \frac{\gamma \sigma^2}{\alpha^2}$, the same as for ridge regression. Some simulation results are shown in Figure 10, and they show the expected shape (e.g., they decrease with $\xi$)." }, { "heading": "A.12 NUMERICAL RESULTS", "text": "" }, { "heading": "A.12.1 DUAL ORTHOGONAL SKETCHING", "text": "See Figure 11 for additional simulation results for dual orthogonal sketching." }, { "heading": "A.12.2 PERFORMANCE AT A FIXED REGULARIZATION PARAMETER", "text": "First we fix the regularization parameter at the optimal value for the original ridge regression. The results are visualized in Figure 12. On the x-axis, we plot the reduction in sample size m/n for the primal sketch, and the reduction in dimension d/p for the dual sketch. In this case, primal and dual sketching will increase both bias and variance, and empirically, in the current case, dual sketching increases them more. So in this particular case, the primal sketch is preferred." }, { "heading": "A.12.3 PERFORMANCE AT THE OPTIMAL REGULARIZATION PARAMETER", "text": "We find the optimal regularization parameter λ for primal and dual orthogonal sketching. Then we use the optimal regularization parameter for all settings; see Figure 13. Both primal and dual sketching increase the bias but decrease the variance. It is interesting to note that, for equal parameters ξ and ζ, and in our particular case, the dual sketch has smaller variance but larger bias. So the primal sketch is preferred when bias or MSE is important, but the dual sketch is more desirable when one wants smaller variance. All in all, the dual sketch has larger MSE than the primal sketch in the current setting. It can also be seen that in this specific example, the optimal λ for the primal sketch is smaller than that of the dual sketch. However, these results are hard to interpret, because there is no natural correspondence between the two parameters ξ and ζ." }, { "heading": "A.13 COMPUTATIONAL COMPLEXITY", "text": "Since sketching is a method to reduce computational complexity, it is important to discuss how much computational efficiency we gain. Recall our three estimators:
$$\hat{\beta} = \Big(\frac{X^\top X}{n} + \lambda I_p\Big)^{-1} \frac{X^\top Y}{n} = \frac{1}{n} X^\top \Big(\frac{XX^\top}{n} + \lambda I_n\Big)^{-1} Y,$$
$$\hat{\beta}_p = \Big(\frac{X^\top L^\top L X}{n} + \lambda I_p\Big)^{-1} \frac{X^\top Y}{n},$$
$$\hat{\beta}_d = \frac{1}{n} X^\top \Big(\frac{XRR^\top X^\top}{n} + \lambda I_n\Big)^{-1} Y.$$
Their computational complexity, when computed in the usual way, is:

• No sketch (standard ridge): if p < n, computing X^⊤Y and X^⊤X requires O(np) and O(np²) flops, and then solving the linear system (X^⊤X/n + λI_p)β̂ = X^⊤Y/n requires O(p³) flops by the LU decomposition. This is O(np²) flops in total. If p > n, we use the second formula for β̂, and the total is O(pn²) flops.

• Primal sketch: for the Hadamard sketch (and other sketches based on the FFT), computing LX by the FFT requires O(mp log n) flops and computing (LX)^⊤LX requires O(mp²) flops, so the total is O(p³ + mp(log n + p)) flops. The primal sketch can therefore reduce the computational cost only when p < n.

• Dual sketch: computing XRR^⊤X^⊤ requires O(nd(log p + n)) flops by the FFT, solving (XRR^⊤X^⊤/n + λI_n)^{-1}Y requires O(n³) flops, and the matrix-vector multiplication of X^⊤ with (XRR^⊤X^⊤/n + λI_n)^{-1}Y requires O(np) flops, so the total is O(n³ + nd(log p + n)) flops. Dual sketching can reduce the computational cost only when p > n." } ]
2020
null
SP:d5ccf8fdd029c2a99dac0441385f280ed3fc03fb
[ "The authors extended the regular convolution and proposed spatially shuffled convolution to use the information outside of its RF, which is inspired by the idea that horizontal connections are believed to be important for visual processing in the visual cortex in biological brain. The authors proposed ss convolution for regular convolution and group convolution. The authors tested the proposed ss convolution on multiple CNN models and show improvement of results. Finally, detailed analysis of spatial shuffling and ablation study was conducted.", "In this paper, the authors proposed a shuffle strategy for convolution layers in convolutional neural networks (CNNs). Specifically, the authors argued that the receptive field (RF) of each convolutional filter should be not constrained in the small patch. Instead, it should also cover other locations beyond the local patch and also the single channel. Based on this motivation, the authors proposed a spatial shuffling layer which is aimed at shuffling the original feature responses. In the experimental results, the authors evaluated the proposed ss convolutional layer on CIFAR-10 and ImageNet-1k and compared with various baseline architectures. Besides, the authors further did some ablated analysis and visualizations for the proposed ss convolutional layer." ]
Convolutional Neural Networks (CNNs) are composed of multiple convolution layers and show elegant performance in vision tasks. The design of the regular convolution is based on the Receptive Field (RF), where the information within a specific region is processed. From the viewpoint of the regular convolution's RF, the outputs of neurons in lower layers with smaller RFs are bundled to create neurons in higher layers with larger RFs. As a result, the neurons in higher layers are able to capture the global context even though the neurons in lower layers only see local information. However, in lower layers of the biological brain, the information outside of the RF changes the properties of neurons. In this work, we extend the regular convolution and propose spatially shuffled convolution (ss convolution). In ss convolution, the regular convolution is able to use the information outside of its RF via spatial shuffling, which is a simple and lightweight operation. We perform experiments on the CIFAR-10 and ImageNet-1k datasets, and show that ss convolution improves the classification performance across various CNNs.
[ { "affiliations": [], "name": "SPATIAL SHUFFLING" } ]
[ { "authors": [ "Ossama Abdel-Hamid", "Abdel-Rahman Mohamed", "Hui Jiang", "Li Deng", "Gerald Penn", "Dong Yu" ], "title": "Convolutional neural networks for speech recognition", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2014 }, { "authors": [ "Jifeng Dai", "Haozhi Qi", "Yuwen Xiong", "Yi Li", "Guodong Zhang", "Han Hu", "Yichen Wei" ], "title": "Deformable convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Hiroshi Fukui", "Tsubasa Hirakawa", "Takayoshi Yamashita", "Hironobu Fujiyoshi" ], "title": "Attention branch network: Learning of attention mechanism for visual explanation", "venue": null, "year": 2019 }, { "authors": [ "Kunihiko Fukushima" ], "title": "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position", "venue": "Biological Cybernetics,", "year": 1980 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N. Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "PMLR,", "year": 2017 }, { "authors": [ "Charles D. Gilbert", "Wu Shyong Li" ], "title": "Top-down influences on visual processing", "venue": "Nature Reviews Neuroscience,", "year": 2013 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "M. Holschneider", "R. Kronland-Martinet", "J. Morlet", "Ph" ], "title": "Tchamitchian. A real-time algorithm for signal analysis with the help of the wavelet transform. Wavelets: Time-Frequency Methods and Phase Space, pp", "venue": null, "year": 1989 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "David H. Hubel", "Torsten N. Wiesel" ], "title": "Receptive fields of single neurons in the cat’s striate cortex", "venue": "Journal of Physiology,", "year": 1959 }, { "authors": [ "Maria Iacaruso", "Ioana Gasler", "Sonja Hofer" ], "title": "Synaptic organization of visual space in primary visual cortex", "venue": null, "year": 2017 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": null, "year": 2016 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": null, "year": 2012 }, { "authors": [ "Yann Lecun", "Lon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Wu Li", "Charles Gilbert" ], "title": "Global contour saliency and local colinear interactions", "venue": "Journal of neurophysiology,", "year": 2002 }, { "authors": [ "Wenjie Luo", "Yujia Li", "Raquel Urtasun", "Richard Zemel" ], "title": "Understanding the effective receptive field in deep convolutional neural networks", "venue": null, "year": 2016 }, { "authors": [ "J.I. Nelson", "B.J. 
Frost" ], "title": "Intracortical facilitation among co-oriented, co-axially aligned simple cells in cat striate cortex", "venue": "Experimental Brain Research,", "year": 1985 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "NIPS Workshop,", "year": 2017 }, { "authors": [ "M Pettet", "Charles Gilbert" ], "title": "Dynamic changes in receptive-field size in cat primary visual cortex", "venue": "Proceedings of the National Academy of Sciences of the United States of America,", "year": 1992 }, { "authors": [ "Kerstin Schmidt", "Rainer Goebel", "Siegrid Lwel", "Wolf Singer" ], "title": "The perceptual grouping criterion of colinearity is reflected by anisotropies of connections in the primary visual cortex", "venue": "European Journal of Neuroscience,", "year": 2006 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollr", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "Fisher Yu", "Vladlen Koltun" ], "title": "Multi-scale context aggregation by dilated convolutions", "venue": null, "year": 2016 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": null, "year": 2017 }, { "authors": [ "Ying Zhang", "Mohammad Pezeshki", "Philemon Brakel", "Saizheng Zhang", "César Laurent", "Yoshua Bengio", "Aaron Courville" ], "title": "Towards end-to-end speech recognition with deep convolutional neural networks", "venue": null, "year": 2016 } ]
[ { "heading": null, "text": "INCORPORATING HORIZONTAL CONNECTIONS IN CONVOLUTION BY SPATIAL SHUFFLING\nAnonymous authors Paper under double-blind review\nConvolutional Neural Networks (CNNs) are composed of multiple convolution layers and show elegant performance in vision tasks. The design of the regular convolution is based on the Receptive Field (RF) where the information within a specific region is processed. In the view of the regular convolution’s RF, the outputs of neurons in lower layers with smaller RF are bundled to create neurons in higher layers with larger RF. As a result, the neurons in high layers are able to capture the global context even though the neurons in low layers only see the local information. However, in lower layers of the biological brain, the information outside of the RF changes the properties of neurons. In this work, we extend the regular convolution and propose spatially shuffled convolution (ss convolution). In ss convolution, the regular convolution is able to use the information outside of its RF by spatial shuffling which is a simple and lightweight operation. We perform experiments on CIFAR-10 and ImageNet-1k dataset, and show that ss convolution improves the classification performance across various CNNs." }, { "heading": "1 INTRODUCTION", "text": "Convolutional Neural Networks (CNNs) and their convolution layers (Fukushima, 1980; Lecun et al., 1998) are inspired by the finding in cat visual cortex (Hubel & Wiesel, 1959) and they show the strong performance in various domains such as image recognition (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), natural language processing (Gehring et al., 2017), and speech recognition (Abdel-Hamid et al., 2014; Zhang et al., 2016). A notable characteristic of the convolution layer is the Receptive Field (RF), which is the particular input region where a convolutional output is affected by. The units (or neurons) in higher layers have larger RF by bundling the outputs of the units in lower layers with smaller RF. Thanks to the hierarchical architectures of CNNs, the units in high layers are able to capture the global context even though the units in low layers only see the local information.\nIt is known that neurons in the primary visual cortex (i.e., V1 which is low layers) change the selfproperties (e.g., the RF size (Pettet & Gilbert, 1992) and the facilitation effect (Nelson & Frost, 1985)) based on the information outside of the RF (D.Gilbert, 1992). The mechanism is believed to originate from (1) feedbacks from the higher-order area (Iacaruso et al., 2017) and (2) intracortical horizontal connections (D.Gilbert, 1992). The feedbacks from the higher-order area convey broader-contextual information than the neurons in V1, which allows the neurons in V1 to use the global context. For instance, Gilbert & Li (2013) argued that the feedback connections work as attention. Horizontal connections allow the distanced neurons in the layer to communicate with each other and are believed to play an important role in visual contour integration (Li & Gilbert, 2002) and object grouping (Schmidt et al., 2006).\nThough both horizontal and feedback connections are believed to be important for visual processing in the visual cortex, the regular convolution ignores the properties of these connections. In this work, we particularly focus on algorithms to introduce the function of horizontal connections for the regular convolution in CNNs. 
We propose spatially shuffled convolution (ss convolution), where the information outside of the regular convolution's RF is incorporated by spatial shuffling, which is a simple and lightweight operation. Our ss convolution is the same operation as the regular convolution except for the spatial shuffling and requires no extra learnable parameters. The design of ss convolution is highly inspired by the function of horizontal connections. To test the effectiveness of the information outside of the regular convolution's RF in CNNs, we perform experiments on CIFAR-10 (Krizhevsky, 2009) and the ImageNet 2012 dataset (Russakovsky et al., 2015), and show that ss convolution improves the classification performance across various CNNs. These results indicate that the information outside of the RF is useful when processing local information. In addition, we conduct several analyses to examine why ss convolution improves the classification performance in CNNs, and show that spatial shuffling allows the regular convolution to use the information outside of its RF." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 VARIANTS OF CONVOLUTION LAYERS AND NEURAL MODULES", "text": "There are two types of approaches to improve the Receptive Field (RF) of CNNs with the regular convolution: broadening the kernel of the convolution layer and modulating activation values by self-attention.

Broadening Kernel:

The atrous convolution (Holschneider et al., 1989; Yu & Koltun, 2016) is the convolution with a strided kernel. The stride is not learnable and is given in advance. The atrous convolution can have a larger RF than the regular convolution with the same computational complexity and number of learnable parameters.

The deformable convolution (Dai et al., 2017) is the atrous convolution with a learnable kernel stride that depends on inputs and spatial locations. The stride of the deformable convolution changes flexibly, unlike the atrous convolution; however, the deformable convolution requires extra computation to calculate the strides.

Both atrous and deformable convolution contribute to broadening the RF; however, it is not plausible to use the pixel information at a distant location when processing local information. Let us consider the case where the information $p$ pixels away is useful for processing local information at layer $l$. In the simple case, it is known that the size of the RF grows with $k\sqrt{n}$, where $k$ is the size of the convolution kernel and $n$ is the number of layers (Luo et al., 2016). In this case, the size of the kernel needs to be $p/\sqrt{n}$, and $k$ is around 45 when $p = 100$ and $l = 5$. If the kernel size is $3 \times 3$, then the stride needs to be about 21 across layers. Such a large stride causes both the atrous and the deformable convolution to have a sparse kernel, which is not suitable for processing local information.
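To make the arithmetic explicit, a short worked version of the example above (ours; using the quoted $k\sqrt{n}$ growth rate, with the rounding being an assumption):
$$k = \frac{p}{\sqrt{n}} = \frac{100}{\sqrt{5}} \approx 44.7 \approx 45, \qquad s \approx \frac{k-1}{2} = 22,$$
i.e., a $3 \times 3$ kernel must be dilated with a stride of roughly 21 to 22 to span an effective extent of 45 pixels, which leaves the kernel extremely sparse.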
Self-Attention:

The Squeeze-and-Excitation module (SE module) (Hu et al., 2018) is proposed to modulate activation values by using the global context, which is obtained by Global Average Pooling (GAP) (Lin et al., 2014). The SE module allows CNNs with the regular convolution to use the information outside of its RF, as our ss convolution does. In our experiments, ss convolution gives marginal improvements on SEResNet50 (Hu et al., 2018), which is ResNet50 (He et al., 2016) with the SE module. This result makes us wonder why ss convolution improves the performance of SEResNet50; thus we conduct analyses and find that the RF of SEResNet50 is location-independent while the RF of ResNet with ss convolution is location-dependent. This result is reasonable since the spatial information of activation values is not conserved by GAP in the SE module. We conclude that such a difference may be the reason why ss convolution improves the classification performance on SEResNet50.

Attention Branch Network (ABN) (Fukui et al., 2019) is proposed for top-down visual explanation by using an attention mechanism. ABN uses the output of the side branch to modulate the activation values of the main branch. The outputs of the side branch have a larger RF than those of the main branch; thus the main branch is able to modulate its activation values based on the information outside of the main branch's RF. In our experiments, ss convolution improves the performance of ABN, and we assume that this is because ABN works like feedback from higher-order areas, unlike ss convolution, which is inspired by the function of horizontal connections." }, { "heading": "2.2 UTILIZATION OF SHUFFLING IN CNNS", "text": "ShuffleNet (Zhang et al., 2017) is designed as a computation-efficient CNN architecture, and the group convolution (Krizhevsky et al., 2012) is heavily used. It shuffles channels to create cross-group information flow across multiple group convolution layers.

The motivation for using shuffling differs between ShuffleNet and our ss convolution. On the one hand, our ss convolution uses spatial shuffling to use the information from outside of the regular convolution's RF. On the other hand, the channel shuffling in ShuffleNet does not broaden the RF and does not contribute to using the information outside of the RF." }, { "heading": "3 METHOD", "text": "In this section, we introduce spatially shuffled convolution (ss convolution)." }, { "heading": "3.1 SPATIALLY SHUFFLED CONVOLUTION", "text": "Horizontal connections are a mechanism to use information outside of the RF. We propose ss convolution to incorporate this mechanism into the regular convolution; it consists of two components: spatial shuffling and the regular convolution. The shuffling is based on a permutation matrix that is generated at initialization. The permutation matrix is fixed while training and testing.

Our ss convolution is defined as follows:
$$y_{i,j} = \sum_{c=1}^{C} \sum_{(\Delta i, \Delta j) \in R} w_{c, \Delta i, \Delta j} \cdot P(x_{c,\, i+\Delta i,\, j+\Delta j}), \quad (1)$$
$$P(x_{c,i,j}) = \begin{cases} \pi(x_{c,i,j}), & c \le \lfloor \alpha C \rfloor, \\ x_{c,i,j}, & \text{otherwise.} \end{cases} \quad (2)$$
$R$ represents the offset coordinates of the kernel. For example, in the case of the $3 \times 3$ kernel, $R = \{(-1,-1), (-1,0), (-1,1), (0,-1), (0,0), (0,1), (1,-1), (1,0), (1,1)\}$. $x \in \mathbb{R}^{C \times I \times J}$ is the input and $w \in \mathbb{R}^{C_w \times I_w \times J_w}$ is the kernel weight of the regular convolution. In Eqn. (2), the input $x$ is shuffled by $P$ and then the regular convolution is applied. Fig. 1-(a) is a visualization of Eqn. (2). $\alpha \in [0, 1]$ is the hyper-parameter that controls how many channels are shuffled. If $\lfloor \alpha C \rfloor = 0$, then ss convolution is the same as the regular convolution. At initialization, we randomly generate the permutation matrix $\pi \in \{0,1\}^{m \times m}$, where $\sum_{i=1}^{m} \pi_{i,j} = 1$, $\sum_{j=1}^{m} \pi_{i,j} = 1$ and $m = I \cdot J \cdot \lfloor \alpha C \rfloor$.¹ The generated $\pi$ is fixed for training and testing.

The result on CIFAR-10 across various $\alpha$ is shown in Fig. 2. The biggest improvement of the classification performance is obtained when $\alpha$ is around 0.06.

¹We implement Eqn. (2) by indexing; thus we hold $m$ long ints instead of an $m \times m$ binary matrix. The implementation of ss convolution is shown in Appendix A.2.
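A minimal sketch of Eqns. (1)-(2) (ours, simplified from the full module reproduced later in Listing 1; the input layout is assumed to be (batch, C, I, J), and the value of α here is purely illustrative):

import torch

def ss_shuffle(x, perm, alpha):
    # Eqn. (2): permute the spatial entries of the first floor(alpha * C)
    # channels with a fixed permutation `perm`; leave the rest untouched
    c = int(alpha * x.size(1))
    if c == 0:
        return x                       # reduces to the regular convolution
    b = x.size(0)
    head = x[:, :c].reshape(b, -1)[:, perm].view(b, c, x.size(2), x.size(3))
    return torch.cat((head, x[:, c:]), dim=1)

# ss convolution = spatial shuffling followed by a regular convolution (Eqn. 1)
conv = torch.nn.Conv2d(16, 32, kernel_size=3, padding=1)
x = torch.randn(2, 16, 8, 8)
perm = torch.randperm(int(0.25 * 16) * 8 * 8)   # generated once, then fixed
y = conv(ss_shuffle(x, perm, alpha=0.25))

Note that the permutation is drawn once and reused for every forward pass, matching the fixed π above.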
}, { "heading": "3.2 SPATIALLY SHUFFLED GROUP CONVOLUTION", "text": "The group convolution (Krizhevsky et al., 2012) is the variants of the regular convolution. We find that the shuffling operation of Eqn. 2 is not suitable for the group convolution. ResNeXt (Xie et al., 2017) is CNN to use heavily group convolutions and Table 1 shows the test error of ResNeXt in CIFAR-10 (Krizhevsky, 2009). As can be seen in Table 1, the improvement of the classification performance is marginal with Eqn. 2. Thus, we propose the spatial shuffling for the\n1We implement Eqn. 2 by indexing, thus we hold m long int instead of m × m binary matrix. The implementation of ss convolution is shown in Appendix A.2.\ngroup convolution as follows:\nP (xc,i,j)= { π(xc,i,j), 0 ≡ C mod b 1αc, xc,i,j , otherwise.\n(3)\nEqn. 3 represents that the shuffled parts are interleaved like the illustration in Fig. 1-(b). As can be seen in Table 1, ss convolution with Eqn. 3 improves the classification performance of ResNeXt." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 PREPARATIONS", "text": "We use CIFAR-10 (Krizhevsky, 2009) and ImageNet-1k (Russakovsky et al., 2015) for our experiments.\nCIFAR-10. CIFAR-10 is the image classification dataset. There are 50000 training images and 10000 validation images with 10 classes. As data augmentation and preprocessing, translation by 4 pixels, stochastic horizontal flipping, and global contrast normalization are applied onto images with 32× 32 pixels. We use three types of models of WRN 16-4 (Zagoruyko & Komodakis, 2016), DenseNet-BC 12-100 (Huang et al., 2017) and ResNeXt 2-64d (Xie et al., 2017).\nImageNet-1k. ImageNet-1k is the large scale dataset for the image classification. There are 1.28M training images and 50k validation images with 1000 classes. As data augmentation and preprocessing, resizing images with the scale and aspect ratio augmentation and stochastic horizontal flipping are applied onto images. Then, global contrast normalization is applied to randomly cropped images with 224× 224 pixels. In this work, we use ResNet50 (He et al., 2016), DenseNet121 (Huang et al., 2017), SEResNet50 (Hu et al., 2018) and ResNet50 with ABN (Fukui et al., 2019) for ImageNet-1k experiments.\nImplementation Details. As the optimizer, we use Momentum SGD with momentum of 0.9 and weight decay of 1.0 × 10−4. In CIFAR-10, we train models for 300 epochs with 64 batch size. In ImageNet, we train models for 100 epochs with 256 batch size. In CIFAR-10 and ImageNet, the learning rate starts from 0.1 and is divided by 10 at 150, 250 epochs and 30, 60, 90 epochs, respectively.\nmodel method test error (%)\nWRN 16-4 Conv 5.1 SS Conv 4.8\nDenseNet-BC Conv 5.0 SS Conv 4.6\nResNeXt-29 (2x64d) Conv 4.3 SS Conv 3.9\nTable 2: The result of CIFAR-10. The test error at the last epoch is used and the reported results are averaged out of 5 runs.\nmodel method top-1 err (%)\nResNet50 Conv 23.5 SS Conv 23.0\nResNet50 w/ ABN Conv 22.9 SS Conv 22.6\nSEResNet50 Conv 22.7 SS Conv 22.6\nDenseNet121 Conv 24.5 SS Conv 24.1\nResNeXt-50 (32x4d) Conv 22.4 SS Conv 22.0\nIn our experiments, we replace all regular convolutions with ss convolutions except for downsampling layers, and use single α across all layers. We conduct grid search of α ∈ {0.02, 0.04, 0.06} and α is decided according to the classification performance on validation dataset." 
}, { "heading": "4.2 RESULTS", "text": "We replace all regular convolutions with ss convolutions to investigate whether the information outside of the regular convolution’s RF contributes to improving the generalization ability. The results are shown in Table 2 and 3. As can be seen in Table 2 and 3, ss convolution contributes to improve the classification performance across various CNNs except for SEResNet50 that shows marginal improvements. The detailed analysis of the reason why ss convolution gives the marginal improvements in SEResNet50 is shonw in Sec. 5\nSince α ∈ {0.02, 0.04, 0.06} is small, the small portion of the input are shuffled, thus ss convolution improves the classification performance with small amount of extra shuffling operations and without extra learnable parameters. The inference speed is shown in Table 4 and ss convolution make the inference speed 1.15 times slower in exchange for 0.5% improvements in ImageNet-1k dataset. The more efficient implementation2 may decrease the gap of the inference speed between the regular convolution and ss convolution." }, { "heading": "5 ANALYSIS", "text": "In this section, we demonstrate two analysis to understand why ss convolution improves the classification performance across various CNNs: the receptive field (RF) analysis and the layer ablation experiment.\n2Our implementation is shown in Appendix A.2.\nReceptive Field Analysis. We calculate the RF of SEResNet50, ResNet50 with ss convolution and the regular convolution. The purpose of this analysis is to examine whether ss convolution contributes to use the information outside of the regular convolution’s RF.\nLayer Ablation Experiment. The layer ablation experiment is conducted to know which ss convolution influences the model prediction. In the primary visual cortex, the neurons change selfproperties based on the information outside of RF, thus we would like to investigate whether spatial shuffling in low layers contribute to predictions or not.\nOur analyses are based on ImageNet-1k pre-trained model and the structure of ResNet50 (i.e., the base model for analysis) is shown in Table 5." }, { "heading": "5.1 DOES SPATIAL SHUFFLING CONTRIBUTE TO USE THE INFORMATION FROM OUTSIDE OF RECEPTIVE FIELD?", "text": "In our analysis, we calculate the RF to investigate whether ss convolution uses the information outside of the regular convolution’s RF. The receptive field is obtained by optimization as follows:\nR∗ = argmin R ‖(M · φl(σ(R) · x)− φl(x)‖22 + β ‖σ(R)‖1 . (4)\nx ∈ RC×I×J is input, and R ∈ RC×I×J is the RF to calculate and learnable. σ is sigmoid function, thus 0 ≤ σ(R) ≤ 1. φl is the outpus of the trained model at the layer l. We call the first term in Eqn. 4 as the local perceptual loss. It is similar to the perceptual loss (Johnson et al., 2016), and the difference is the dot product of M ∈ {0, 1}C×I×J that works as masking. M is the binary mask and Mcij = 1 if 96 ≤ i, j ≤ 128, otherwise Mcij = 0 in our analysis. In other words, the values inside the blue box in Fig 3 are the part of Mcij = 1. The local perceptual loss minimizes the distance of feature on the specific region between σ(R) ·x and x. The 2nd term is the penalty to evade the trivial case such as σ(R) = 1. In layerwise and channelwise RF anlysis, we use β of 1.0× 10−6 and 1.0× 10−12, respectively. We use Adam optimizer (Kingma & Ba, 2015) to calculate R∗. As the hyper-parameter, lr, β1, and β2 are 0.1, 0.9, 0.99, respectively. The high lr is used since its slow convergence. 
Layerwise Receptive Field. We calculate the RF for each model, and the results are shown in Fig. 3. The top row is the RF of ResNet50 with the regular convolution, the middle row is the one with ss convolution, and the bottom row is the one of SEResNet50. The red color indicates that the value of the pixel there changes the features inside the blue box, and the white color represents that the features inside the blue box are invariant even if the value of the pixel there is changed.

In the top row of Fig. 3, the RFs of ResNet50 with the regular convolution are shown. The size of the RF becomes larger as the layer becomes deeper. This result is reasonable, and the obtained RFs match the classical view. If the RF of ResNet50 with ss convolution goes beyond the one with the regular convolution, it indicates that ss convolution successfully uses the information outside of the regular convolution's RF.

The middle and bottom rows of Fig. 3 show the RFs of ResNet50 with ss convolution and of SEResNet50, respectively. These RFs cover the entire image, unlike the RF with the regular convolution. These results indicate that both the SE module and ss convolution contribute to using the information outside of the regular convolution's RF.

Fig. 5-(a) shows the size of the RF across layers. The horizontal axis is the name of the layer and the vertical axis represents the size of the RF, calculated as $\frac{\|\sigma(R) \ge 0.5\|_1}{|R|}$, where $\|\cdot\|_1$ is the L1 norm and $|\cdot|$ is the total size of the matrix. $\frac{\|\sigma(R) \ge 0.5\|_1}{|R|}$ is in the range between 0 and 1 and represents the ratio of entries of $\sigma(R)$ that are bigger than 0.5. As can be seen in Fig. 5-(a), the size of the RFs is consistently almost 1 across layers in ResNet50 with ss convolution and in SEResNet50. This result also shows that the SE module and ss convolution contribute to using the information outside of the regular convolution's RF. This may be the reason why ss convolution improves the classification performance on various CNNs. However, these results make us wonder why ss convolution only marginally improves the performance of SEResNet50. A further analysis is conducted in the channelwise RF analysis.

Channelwise Receptive Field. Since the layerwise RF analysis is based on the RF of an entire layer, the obtained results are coarse. We calculate the channelwise RF for a more fine-grained analysis. Unlike the layerwise RF analysis, $M$ is defined differently and we minimize the local perceptual loss on a specific channel. The results are shown in Fig. 4. Fig. 4 (a) and (c) use $M_{cij} = 1$ if $64 \le i, j \le 96$ and $c = 64$, and $M_{cij} = 0$ otherwise. Fig. 4 (b) and (d) use $M_{cij} = 1$ if $96 \le i, j \le 128$ and $c = 64$, and $M_{cij} = 0$ otherwise. Fig. 4 (a)-(b) are the RFs of ResNet50 with ss convolution and (c)-(d) are the RFs of SEResNet50. As can be seen in Fig. 4, the RFs of ResNet50 with ss convolution (i.e., Fig. 4 (a)-(b)) are different when the blue box is shifted; however, the RFs of SEResNet50 (i.e., Fig. 4 (c)-(d)) are similar even if the blue box is shifted. These results indicate that the information outside of the regular convolution's RF is location-independent in SEResNet50 and location-dependent in ResNet50 with ss convolution. This is reasonable since the SE module uses global average pooling and the spatial information is not conserved.
This difference may be the reason why ss convolution marginally improves the classification performance on SEResNet50." }, { "heading": "5.2 ABLATION STUDY", "text": "We conduct a layer ablation study to investigate which ss convolutions contribute to the generalization ability. The ablation is done as follows:
$$P(x_{c,i,j}) = \begin{cases} 0, & c \le \lfloor \alpha C \rfloor, \\ x_{c,i,j}, & \text{otherwise.} \end{cases} \quad (5)$$
Eqn. (5) represents that the activation values of the shuffled parts become 0. The result of the ablation experiment is shown in Fig. 5-(b). Eqn. (5) is applied to all ss convolutions in each block, and the biggest drop of the classification performance happens at the ablation of conv4_4. It indicates that it is useful to use the information outside of the regular convolution's RF between the middle and high layers. The classification performance is degraded even if the ablation is applied to the first bottleneck (i.e., conv2_1). This result implies that the information outside of the regular convolution's RF is useful even at low layers." }, { "heading": "6 CONCLUSION", "text": "In this work, we propose spatially shuffled convolution (ss convolution) to incorporate the function of horizontal connections into the regular convolution. The spatial shuffling is simple, lightweight, and requires no extra learnable parameters. The experimental results demonstrate that ss convolution captures the information outside of the regular convolution's RF even in lower layers. The results and our analyses also suggest that using distant (i.e., non-local) information is effective for the regular convolution and improves the classification performance across various CNNs." }, { "heading": "A APPENDIX", "text": "A.1 RESULTS OF LAYERWISE RECEPTIVE FIELD

The receptive fields of all layers are shown in Figs. 6, 7 and 8.

A.2 EXAMPLE OF CODE

The code of the spatially shuffled convolution is shown in Listing 1. It is written in Python with PyTorch (Paszke et al., 2017), version 1.0.1.post2. The training and model code will be available online after the review.

import torch


class SSConv2d(torch.nn.Module):

    def __init__(self, inplanes, outplanes, kernel_size=3, stride=1, padding=1,
                 bias=None, groups=1, dilation=1, alpha=0.04):
        super(SSConv2d, self).__init__()
        self.conv = torch.nn.Conv2d(inplanes, outplanes, kernel_size=kernel_size,
                                    stride=stride, padding=padding, groups=groups,
                                    dilation=dilation, bias=bias)
        self.alpha, self.groups = alpha, groups

    def create_shuffle_indices(self, x):
        # generate the fixed random permutation over the spatial entries of
        # the first floor(alpha * C) input channels (Eqn. 2)
        _, inplanes, height, width = x.size()
        self.shuffle_until_here = int(inplanes * self.alpha)
        # if self.shuffle_until_here == 0, then it's exactly the same as the
        # regular convolution
        if self.shuffle_until_here >= 1:
            self.register_buffer(
                'random_indices',
                torch.randperm(self.shuffle_until_here * height * width))

    @staticmethod
    def group(shuffled_x, non_shuffled_x):
        # interleave the shuffled channels among the groups (Eqn. 3)
        batch, ch_ns, height, width = non_shuffled_x.shape
        _, ch_s, _, _ = shuffled_x.shape
        length = int(ch_ns / ch_s)
        residue = ch_ns - length * ch_s
        # shuffled_x is interleaved
        if residue == 0:
            return torch.cat(
                (shuffled_x.unsqueeze(1),
                 non_shuffled_x.reshape(batch, length, ch_s, height, width)),
                1).view(batch, ch_ns + ch_s, height, width)
        else:
            return torch.cat(
                (torch.cat(
                    (shuffled_x.unsqueeze(1),
                     non_shuffled_x[:, residue:].reshape(batch, length, ch_s, height, width)),
                    1).view(batch, ch_ns + ch_s - residue, height, width),
                 non_shuffled_x[:, :residue]), 1)

    def shuffle(self, x):
        if self.shuffle_until_here >= 1:
            # ss convolution
            shuffled_x = x[:, :self.shuffle_until_here]
            non_shuffled_x = x[:, self.shuffle_until_here:]
            batch, ch, height, width = shuffled_x.size()
            shuffled_x = torch.index_select(
                shuffled_x.reshape(batch, -1), 1,
                self.random_indices).view(batch, ch, height, width)
            if self.groups >= 2:
                return self.group(shuffled_x, non_shuffled_x)
            else:
                return torch.cat((shuffled_x, non_shuffled_x), 1)
        else:
            # regular convolution
            return x

    def forward(self, x):
        if hasattr(self, 'random_indices') is False:
            # create the random permutation at initialization
            self.create_shuffle_indices(x)
        # spatial shuffling
        x = self.shuffle(x)
        # regular convolution
        x = self.conv(x)
        return x

Listing 1: Implementation of Spatially Shuffled Convolution" } ]
2019
null
SP:aec7ce88f21b38c205522c88b3a3253e24754182
[ "A method for a refinement loop for program synthesizers operating on input/ouput specifications is presented. The core idea is to generate several candidate solutions, execute them on several inputs, and then use a learned component to judge which of the resulting input/output pairs are most likely to be correct. This avoids having to judge the correctness of the generated programs and instead focuses on the easier task of judging the correctness of outputs. An implementation of the idea in a tool for synthesizing programs generating UIs is evaluated, showing impressive improvements over the baseline.", "This paper handles the challenge of generating generalizable programs from input-output specifications when the size of the specification can be quite limited and therefore ambiguous. When proposed candidate programs lead to divergent outputs on a new input, the paper proposes to use a learned neural oracle that can evaluate which of the outputs are most likely. The paper applies their technique to the task of synthesizing Android UI layout code from labels of components and their positions." ]
A key challenge of existing program synthesizers is ensuring that the synthesized program generalizes well. This can be difficult to achieve as the specification provided by the end user is often limited, containing as few as one or two input-output examples. In this paper we address this challenge via an iterative approach that finds ambiguities in the provided specification and learns to resolve these by generating additional input-output examples. The main insight is to reduce the problem of selecting which program generalizes well to the simpler task of deciding which output is correct. As a result, to train our probabilistic models, we can take advantage of the large amounts of data in the form of program outputs, which are often much easier to obtain than the corresponding ground-truth programs.
[ { "affiliations": [], "name": "Larissa Laich" }, { "affiliations": [], "name": "Pavol Bielik" }, { "affiliations": [], "name": "Martin Vechev" } ]
[ { "authors": [ "Matej Balog", "Alexander L. Gaunt", "Marc Brockschmidt", "Sebastian Nowozin", "Daniel Tarlow" ], "title": "Deepcoder: Learning to write programs", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Daniel W. Barowy", "Sumit Gulwani", "Ted Hart", "Ben Zorn" ], "title": "Flashrelate: Extracting relational data from semi-structured spreadsheets using examples", "venue": "Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation", "year": 2014 }, { "authors": [ "Pavol Bielik", "Veselin Raychev", "Martin Vechev" ], "title": "Program synthesis for character level language modeling", "venue": "In 5rd International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Pavol Bielik", "Marc Fischer", "Martin Vechev" ], "title": "Robust relational layout synthesis from examples for android", "venue": "Proc. ACM Program. Lang.,", "year": 2018 }, { "authors": [ "Matko Bosnjak", "Tim Rocktäschel", "Jason Naradowsky", "Sebastian Riedel" ], "title": "Programming with a differentiable forth interpreter", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Matko Bošnjak", "Tim Rocktäschel", "Jason Naradowsky", "Sebastian Riedel" ], "title": "Programming with a differentiable forth interpreter", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Towards synthesizing complex programs from inputoutput examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Execution-guided neural program synthesis", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Loris D’Antoni", "Roopsha Samanta", "Rishabh Singh" ], "title": "Qlose: Program repair with quantiative objectives", "venue": "In 27th International Conference on Computer Aided Verification", "year": 2016 }, { "authors": [ "Leonardo De Moura", "Nikolaj Bjørner. 
Z" ], "title": "An efficient smt solver", "venue": "In Proceedings of the Theory and Practice of Software, 14th International Conference on Tools and Algorithms for the Construction and Analysis of Systems,", "year": 2008 }, { "authors": [ "Biplab Deka", "Zifeng Huang", "Chad Franzen", "Joshua Hibschman", "Daniel Afergan", "Yang Li", "Jeffrey Nichols", "Ranjitha Kumar" ], "title": "Rico: A mobile app dataset for building data-driven design applications", "venue": "In Proceedings of the 30th Annual Symposium on User Interface Software and Technology,", "year": 2017 }, { "authors": [ "Jacob Devlin", "Jonathan Uesato", "Surya Bhupatiraju", "Rishabh Singh", "Abdel-rahman Mohamed", "Pushmeet Kohli" ], "title": "Robustfill: Neural program learning under noisy I/O", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Kevin Ellis", "Sumit Gulwani" ], "title": "Learning to learn programs from examples: Going beyond program structure", "venue": "In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Kevin Ellis", "Daniel Ritchie", "Armando Solar-Lezama", "Josh Tenenbaum" ], "title": "Learning to infer graphics programs from hand-drawn images", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Kevin Ellis", "Maxwell Nye", "Yewen Pu", "Felix Sosa", "Josh Tenenbaum", "Armando Solar-Lezama" ], "title": "Write, execute, assess: Program synthesis with a repl", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Alexander L. Gaunt", "Marc Brockschmidt", "Rishabh Singh", "Nate Kushman", "Pushmeet Kohli", "Jonathan Taylor", "Daniel Tarlow" ], "title": "Terpret: A probabilistic programming language for program induction", "venue": null, "year": 2016 }, { "authors": [ "Sumit Gulwani" ], "title": "Automating string processing in spreadsheets using input-output examples", "venue": "In Thomas Ball and Mooly Sagiv (eds.), Proceedings of the 38th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,", "year": 2011 }, { "authors": [ "Brian Hempel", "Ravi Chugh" ], "title": "Semi-automated svg programming via direct manipulation", "venue": "In Proceedings of the 29th Annual Symposium on User Interface Software and Technology,", "year": 2016 }, { "authors": [ "Arun Iyer", "Manohar Jonnalagedda", "Suresh Parthasarathy", "Arjun Radhakrishna", "Sriram K. Rajamani" ], "title": "Synthesis and machine learning for heterogeneous extraction", "venue": "In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation,", "year": 2019 }, { "authors": [ "Susmit Jha", "Sumit Gulwani", "Sanjit A. 
Seshia", "Ashish Tiwari" ], "title": "Oracle-guided component-based program synthesis", "venue": "In Proceedings of the 32Nd ACM/IEEE International Conference on Software Engineering - Volume 1,", "year": 2010 }, { "authors": [ "Ashwin Kalyan", "Abhishek Mohta", "Oleksandr Polozov", "Dhruv Batra", "Prateek Jain", "Sumit Gulwani" ], "title": "Neural-guided deductive search for real-time program synthesis from examples", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Vu Le", "Sumit Gulwani" ], "title": "Flashextract: a framework for data extraction by examples", "venue": "ACM SIGPLAN Conference on Programming Language Design and Implementation,", "year": 2014 }, { "authors": [ "Woosuk Lee", "Kihong Heo", "Rajeev Alur", "Mayur Naik" ], "title": "Accelerating search-based program synthesis using learned probabilistic models", "venue": "In Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation,", "year": 2018 }, { "authors": [ "Percy Liang", "Michael I. Jordan", "Dan Klein" ], "title": "Learning programs: A hierarchical bayesian approach", "venue": "Proceedings of the 27th International Conference on Machine Learning", "year": 2010 }, { "authors": [ "Fan Long", "Martin Rinard" ], "title": "Automatic patch generation by learning correct code", "venue": "In Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,", "year": 2016 }, { "authors": [ "David Mandelin", "Lin Xu", "Rastislav Bodík", "Doug Kimelman" ], "title": "Jungloid mining: Helping to navigate the api jungle", "venue": "In Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation,", "year": 2005 }, { "authors": [ "Aditya Krishna Menon", "Omer Tamuz", "Sumit Gulwani", "Butler Lampson", "Adam Tauman Kalai" ], "title": "A machine learning framework for programming by example", "venue": "In Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume", "year": 2013 }, { "authors": [ "Vijayaraghavan Murali", "Swarat Chaudhuri", "Chris Jermaine" ], "title": "Bayesian sketch learning for program synthesis", "venue": null, "year": 2017 }, { "authors": [ "Hoang Duong Thien Nguyen", "Dawei Qi", "Abhik Roychoudhury", "Satish Chandra" ], "title": "Semfix: Program repair via semantic analysis", "venue": "In Proceedings of the 2013 International Conference on Software Engineering,", "year": 2013 }, { "authors": [ "Maxwell I. Nye", "Luke B. Hewitt", "Joshua B. 
Tenenbaum", "Armando Solar-Lezama" ], "title": "Learning to infer program sketches", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Emilio Parisotto", "Abdel-rahman Mohamed", "Rishabh Singh", "Lihong Li", "Dengyong Zhou", "Pushmeet Kohli" ], "title": "Neuro-symbolic program synthesis", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Phitchaya Mangpo Phothilimthana", "Aditya Thakur", "Rastislav Bodik", "Dinakar Dhurjati" ], "title": "Scaling up superoptimization", "venue": "In Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems,", "year": 2016 }, { "authors": [ "Illia Polosukhin", "Alexander Skidanov" ], "title": "Neural program search: Solving data processing tasks from description and examples, 2018", "venue": "URL https://openreview.net/forum?id= B1KJJf-R-", "year": 2018 }, { "authors": [ "Oleksandr Polozov", "Sumit Gulwani" ], "title": "Flashmeta: A framework for inductive program synthesis", "venue": "In OOPSLA 2015 Proceedings of the 2015 ACM SIGPLAN International Conference on ObjectOriented Programming,", "year": 2015 }, { "authors": [ "Veselin Raychev", "Pavol Bielik", "Martin T. Vechev", "Andreas Krause" ], "title": "Learning programs from noisy data", "venue": "In Rastislav Bodík and Rupak Majumdar (eds.), Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,", "year": 2016 }, { "authors": [ "Scott Reed", "Nando de Freitas" ], "title": "Neural programmer-interpreters", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Eric Schkufza", "Rahul Sharma", "Alex Aiken" ], "title": "Stochastic program optimization", "venue": "Commun. 
ACM,", "year": 2016 }, { "authors": [ "Eui Chul Shin", "Miltiadis Allamanis", "Marc Brockschmidt", "Alex Polozov" ], "title": "Program synthesis and semantic parsing with learned code idioms", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Richard Shin", "Illia Polosukhin", "Dawn Song" ], "title": "Improving neural program synthesis with inferred execution traces", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Richard Shin", "Neel Kant", "Kavi Gupta", "Chris Bender", "Brandon Trabucco", "Rishabh Singh", "Dawn Song" ], "title": "Synthetic datasets for neural program synthesis", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Rishabh Singh" ], "title": "Blinkfill: Semi-supervised programming by example for syntactic string transformations", "venue": "PVLDB, 9(10):816–827,", "year": 2016 }, { "authors": [ "Rishabh Singh", "Sumit Gulwani" ], "title": "Synthesizing number transformations from input-output examples", "venue": "Computer Aided Verification,", "year": 2012 }, { "authors": [ "Rishabh Singh", "Sumit Gulwani" ], "title": "Predicting a correct program in programming by example", "venue": "Computer Aided Verification,", "year": 2015 }, { "authors": [ "Rishabh Singh", "Sumit Gulwani" ], "title": "Transforming spreadsheet data types using examples", "venue": "In Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,", "year": 2016 }, { "authors": [ "Rishabh Singh", "Sumit Gulwani", "Armando Solar-Lezama" ], "title": "Automated feedback generation for introductory programming assignments", "venue": "In Proceedings of the 34th ACM SIGPLAN Conference on Programming Language Design and Implementation,", "year": 2013 }, { "authors": [ "Shao-Hua Sun", "Hyeonwoo Noh", "Sriram Somasundaram", "Joseph Lim" ], "title": "Neural program synthesis from diverse demonstration videos", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Over the years, program synthesis has been applied to a wide variety of different tasks including string, number or date transformations (Gulwani, 2011; Singh & Gulwani, 2012; 2016; Ellis et al., 2019; Menon et al., 2013; Ellis & Gulwani, 2017), layout and graphic program generation (Bielik et al., 2018; Hempel & Chugh, 2016; Ellis et al., 2019; 2018), data extraction (Barowy et al., 2014; Le & Gulwani, 2014; Iyer et al., 2019), superoptimization (Phothilimthana et al., 2016; Schkufza et al., 2016), code repair (Singh et al., 2013; Nguyen et al., 2013; D’Antoni et al., 2016), language modelling (Bielik et al., 2017), synthesis of data processing programs (Polosukhin & Skidanov, 2018; Nye et al., 2019) or semantic parsing Shin et al. (2019a). To capture user intent in an easy and intuitive way, many program synthesizers let its users provide a set of input-output examples I which the synthesized program should satisfy.\nGeneralization challenge A natural expectation of the end user in this setting is that the synthesized program works well even when I is severely limited (e.g., to one or two examples). Because of this small number of examples and the big search space of possible programs, there are often millions of programs consistent with I. However, only a small number of them generalizes well to unseen examples which makes the synthesis problem difficult.\nExisting methods Several approaches have provided ways to address the above challenge, including using an external model that learns to rank candidate programs returned by the synthesizer, modifying the search procedure by learning to guide the synthesizer such that it returns more likely programs directly, or neural program induction methods that replace the synthesizer with a neural network to generate outputs directly using a latent program representation. However, regardless of what other features these approaches use, such as conditioning on program traces (Shin et al., 2018; Ellis & Gulwani, 2017; Chen et al., 2019) or pre-training on the input data (Singh, 2016), they are limited by the fact that their models are conditioned on the initial, limited user specification.\nThis work We present a new approach for program synthesis from examples which addresses the above challenge. The key idea is to resolve ambiguity by iteratively strengthening the initial specification I with new examples. To achieve this, we start by using an existing synthesizer to find a candidate program p1 that satisfies all examples in I. Instead of returning p1, we use it to find a distinguishing input x∗ that leads to ambiguities, i.e., other programs pi that satisfy I but produce different outputs p1(x∗) 6= pi(x∗). To resolve this ambiguity, we first generate a set of candidate outputs for x∗, then use a neural model (which we train beforehand) that acts as an oracle and selects\nthe most likely output, and finally, add x∗ and its predicted output to the input specification I. The whole process is then repeated. 
These steps are similar to those used in Oracle Guided Inductive Synthesis (Jha et al., 2010), with two main differences: (i) we automate the entire process by learning the oracle from data instead of using a human oracle, and (ii) as we do not use a human oracle to produce a correct output, we need to ensure that the set of candidate outputs contains the correct one.

Augmenting an existing Android layout synthesizer In this work we apply our approach to a state-of-the-art synthesizer, called InferUI (Bielik et al., 2018), that creates an Android layout program which represents the implementation of a user interface. Given an application design consisting of a set of views (e.g., buttons, images, text fields, etc.) and their location on the device screen, InferUI synthesizes a layout program that, when rendered, places the views at that same location. Concretely, each input-output example $(x, y)$ consists of a device screen $x \in \mathbb{R}^4$ and a set of $n$ views $y \in \mathbb{R}^{n \times 4}$, all of which are represented using their coordinates in a two-dimensional Euclidean space. As an example, the input specification I shown in Figure 1 contains a single example with absolute view positions for a Nexus 4 device, and the InferUI synthesizer easily finds multiple programs that satisfy it (dashed box). To apply our method and resolve the ambiguity, we find a distinguishing input x*, in this case a narrower P4 Pro device, on which some of the candidate programs produce different outputs. Then, instead of asking the user to manually produce the correct output, we generate additional candidate outputs (to ensure that the correct one is included) and our learned neural oracle automatically selects one of these outputs (the one it believes is correct) and adds it to the input specification I. In this case, the oracle selects output p2(x*) as both buttons are correctly resized and the distance between them was reduced to match the smaller device width. In contrast, p1(x*) contains overlapping buttons while in pn(x*), only the left button was resized.

Automatically obtaining real-world datasets An important advantage of our approach is that we reduce the problem of selecting which program generalizes well to the simpler task of deciding which output is correct. This is especially useful for domains, such as Android layout synthesis, for which the output correctness depends mostly on properties of the output and not on the program used to generate it. As a result, obtaining a suitable training dataset can be easier, as we do not require the hard-to-obtain ground-truth programs, for which currently no large real-world datasets exist (Shin et al., 2019b). In fact, it is possible to train the oracle using unsupervised learning only, with a dataset consisting of correct input-output examples DU. For example, in layout synthesis an autoencoder can be trained over a large number of views and their positions extracted from running real-world applications. However, instead of training such an unsupervised model, in our work we use DU to automatically construct a supervised dataset DS by labelling the samples in DU as positive and generating a set of negative samples by adding suitable noise to the samples in DU. Finally, we also obtain the dataset DS+ that additionally includes the input specification I. In the domain of Android layouts, although it is more difficult, such a dataset can also be collected automatically by running the same application on devices with different screen sizes.
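As an illustration of this construction, a minimal sketch (ours; the coordinate representation and the Gaussian perturbation scale are assumptions, not the paper's exact scheme):

import numpy as np

def build_supervised_dataset(d_u, noise=8.0, seed=0):
    # D_U: list of (device screen, rendered view coordinates) pairs that are
    # known to be correct; label them positive and derive negatives by
    # perturbing the view coordinates with random noise
    rng = np.random.default_rng(seed)
    d_s = []
    for x, y in d_u:
        y = np.asarray(y, dtype=float)        # shape (n_views, 4)
        d_s.append((True, x, y))              # correct output
        y_neg = y + rng.normal(0.0, noise, size=y.shape)
        d_s.append((False, x, y_neg))         # perturbed, incorrect output
    return d_s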
Our contributions We present a new approach to address the ambiguity in the existing Android layout program synthesizer InferUI by iteratively extending the user-provided specification with new input-output examples. The key component of our method is a learned neural oracle used to generate new examples, which is trained with datasets that do not require human annotations or ground-truth programs. To improve generalization, InferUI already contains a probabilistic model that scores programs $q(p \mid I)$ as well as handcrafted robustness properties, achieving 35% generalization accuracy on a dataset of Google Play Store applications. In contrast, our method significantly improves the accuracy to 71% while using a dataset containing only correct and incorrect program outputs. We make our implementation and datasets available online at:

https://github.com/eth-sri/guiding-synthesizers" }, { "heading": "2 RELATED WORK", "text": "In this section we discuss the work most closely related to ours.

Guiding program synthesis To improve the scalability and generalization of program synthesizers, several techniques have been proposed that guide the synthesizer towards good programs. The most widely used approach is to implement a statistical search procedure which explores candidate programs based on some type of learned probabilistic model – log-linear models (Menon et al., 2013; Long & Rinard, 2016), a hierarchical Bayesian prior (Liang et al., 2010), a probabilistic higher-order grammar (Lee et al., 2018) or a neural network (Balog et al., 2017; Sun et al., 2018). Kalyan et al. (2018) also take advantage of probabilistic models, but instead of implementing a custom search procedure, they use the learned model to guide an existing symbolic search engine. In addition to approaches that search for a good program directly (conditioned on the input specification), a number of works guide the search by first selecting a high-level sketch of the program and then filling in the holes using symbolic (Ellis et al., 2018; Murali et al., 2017; Nye et al., 2019), enumerative or neural search (Bosnjak et al., 2017; Gaunt et al., 2016). A similar idea is also used by Shin et al. (2018), but instead of generating a program sketch the authors first infer execution traces (or condition on partial traces obtained as the program is being generated (Chen et al., 2019)), which are then used to guide the synthesis of the actual program.
We also note that several prior works explore the design of sophisticated neural architectures that encode input-output examples (Sun et al., 2018; Devlin et al., 2017; Parisotto et al., 2017), and incorporating some of their ideas might lead to further improvements to our models presented in Section 4.\nLearning to rank To choose among all programs that satisfy the input specification, existing program synthesizers select the syntactically shortest program (Liang et al., 2010; Polozov & Gulwani, 2015; Raychev et al., 2016), the semantically closest program to a reference program (D'Antoni et al., 2016) or a program based on a learned scoring function (Liang et al., 2010; Mandelin et al., 2005; Singh & Gulwani, 2015; Ellis & Gulwani, 2017; Singh, 2016). Although the scoring function usually extracts features only from the synthesized program, some approaches also take advantage of additional information – Ellis & Gulwani (2017) train a log-linear model using a set of handcrafted features defined over program traces and program outputs, while Singh (2016) leverages unlabelled data by learning common substring expressions shared across the input data.\nSimilar to prior work, our work explores various representations over which the model is learned. Because we applied our work to a domain where outputs can be represented as images (rather than strings or numbers), to achieve good performance we explore different types of models (i.e., convolutional neural networks). Further, we do not assume that the synthesizer can efficiently enumerate all programs that satisfy the input specification, as in Ellis & Gulwani (2017); Singh (2016). For such synthesizers, applying a ranking to the returned candidates will often fail since the correct program is simply not included in the set of synthesized programs. Therefore, the neural oracle is defined over program outputs instead of actual programs. This reduces the search space for the synthesizer as well as the complexity of the machine learning models.\nNeural program induction Devlin et al. (2017) and Parisotto et al. (2017), as well as a related line of work on neural machines (Graves et al., 2016; Reed & de Freitas, 2016; Bošnjak et al., 2017; Chen et al., 2018), explore the design of end-to-end neural approaches that generate the program output for a new input without the need for an explicit search. In this case the goal of the neural network is not to find the correct program explicitly, but rather to generate the most likely output for a given input based on the input specification. These approaches can be integrated into our work as one technique for generating a set of candidate outputs for a given distinguishing input, instead of obtaining them using a symbolic synthesizer. However, the model requirements in our work are much weaker – it is enough if the correct output is among the top n most likely candidates, rather than requiring 100% precision for all possible inputs as in program induction." }, { "heading": "3 LEARNING TO GENERATE NEW INPUT-OUTPUT EXAMPLES", "text": "Let I = {(xi, yi)}_{i=1}^{N} denote the input specification consisting of user-provided input-output examples. Further, assume we are given an existing synthesizer which can find a program p satisfying all examples in I, i.e., ∃p ∈ L. ∀(xi, yi) ∈ I: p(xi) = yi, where p(xi) is the output obtained by running program p on input xi and L is a hypothesis space of valid programs. To reduce clutter, we use the notation p |= I to denote that p satisfies all examples in I.
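As a reading aid, this satisfiability notation corresponds to a simple predicate over the example pairs (a minimal sketch in Python; p is any callable program and I a list of input-output pairs):

def satisfies(p, I):
    # p |= I: program p reproduces every input-output example in I
    return all(p(x) == y for (x, y) in I)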
We extend the synthesizer such that a program p not only satisfies all examples in I but also generalizes to unseen examples, as follows:\n1. Generate a candidate program p1 |= I that satisfies the input specification I.\n2. Find a distinguishing input x∗ and a set of programs p2, . . . , pn that satisfy the input specification I but produce different outputs when evaluated on x∗, and define the candidate outputs as y = {p1(x∗), p2(x∗), · · · , pn(x∗)}. If no distinguishing input x∗ exists, return program p1.\n3. Query an oracle to determine the correct output y∗ ∈ y for the input x∗.\n4. Extend the input specification with the distinguishing input and its corresponding output, I ← I ∪ {(x∗, y∗)}, and continue with the first step.\nFinding a distinguishing input To find the distinguishing input x∗ we take advantage of existing symbolic synthesizers by asking the synthesizer to solve ∃x∗ ∈ X, p2 ∈ L. p2 |= I ∧ p2(x∗) ≠ p1(x∗), where X denotes the set of valid inputs. The result is both a distinguishing input x∗ and a program p2 that produces a different output than p1. Programs p1 and p2 form the initial sequence of candidate outputs y = [p1(x∗); p2(x∗)], which is extended until the oracle is confident enough that y contains the correct output (described later in this section).\nTo make our approach applicable to any existing synthesizer, including those that cannot solve the above satisfiability query directly (e.g., statistical synthesizers), we note that the following sampling approach can also be applied: first, use the synthesizer to generate the top n most likely programs, then randomly sample a valid input x∗ not in the input specification, and finally check if that input leads to ambiguities by computing the output of all candidate programs.\nFinding candidate outputs To extend y with additional candidate outputs once the distinguishing input x∗ is found, three techniques can be applied: (i) querying the synthesizer for another program with a different output: ∃p ∈ L. p |= I ∧ ∀yi ∈ y: p(x∗) ≠ yi, (ii) sampling a program induction model P(y | x∗, I) and using the synthesizer to check whether each sampled output is valid, or (iii) simply sampling more candidate programs, running them on x∗ and keeping the unique outputs. It is possible to use the second approach because we are only interested in the set of different outputs, rather than the actual programs. The advantage of (i) is that it is simple to implement for symbolic synthesizers and is guaranteed to find different outputs if they exist. In contrast, (ii) has the potential to be faster as it avoids calling the synthesizer, and it works for both statistical and symbolic synthesizers. Finally, (iii) is the least effective, but it does not require pretraining and can be applied to any synthesizer.\nNeural oracle The key component of our approach is a neural oracle which selects the correct program output from a set of candidate outputs y. Formally, the neural oracle is defined as argmax_{y∗ ∈ y} fθ(✓ | x∗, y∗, I), where f is a function with learnable parameters θ (in our case a neural network) that returns the probability of the output y∗ being correct given the input x∗ and the input specification I. We train the parameters θ using a supervised dataset DS+ = {(✓, xi, yi, Ii)}_{i=1}^{N} ∪ {(✗, xj, yj, Ij)}_{j=1}^{M} which, for a given distinguishing input x∗ and input specification I, contains both the correct (✓) as well as the incorrect (✗) outputs.
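Putting the four steps together, a minimal sketch of the refinement loop (synthesize, find_distinguishing_input and candidate_outputs are hypothetical wrappers around the underlying synthesizer, and oracle is any scoring function of the form described above):

def refine(synthesizer, oracle, I):
    # Iteratively extend the specification I until no ambiguity remains.
    while True:
        p1 = synthesizer.synthesize(I)                         # step 1: candidate program
        x_star = synthesizer.find_distinguishing_input(I, p1)
        if x_star is None:                                     # step 2: no ambiguity left
            return p1
        ys = synthesizer.candidate_outputs(I, p1, x_star)      # candidate outputs on x*
        y_star = max(ys, key=lambda y: oracle(x_star, y, I))   # step 3: query the oracle
        I = I + [(x_star, y_star)]                             # step 4: extend I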
Because it might be difficult to obtain such a dataset in practice, we also define a simpler model fθ(✓ | x∗, y∗) that is trained using a supervised dataset DS = {(✓, xi, yi)}_{i=1}^{N} ∪ {(✗, xj, yj)}_{j=1}^{M} which does not include the input specification. In the extreme case, where the dataset contains only the correct input-output examples DU = {(xi, yi)}_{i=1}^{N}, we define the oracle as fθ(y∗ | x∗). Even though the dataset does not contain any labels, we can still train f in an unsupervised manner. This can be achieved, for example, by splitting the output into smaller parts yi = yi^1 · · · yi^t (such as splitting a word into characters) and training f as an unsupervised language model. Alternatively, we could also train an autoencoder that first compresses the output into a lower-dimensional representation, with the loss corresponding to how well it can be reconstructed. To achieve good performance, the architecture used to represent f is tailored to the synthesis domain at hand, as discussed in the next section.\nDynamically controlling the number of generated candidate outputs Since generating a large number of candidate outputs is time-consuming, we allow our models to dynamically control the number of sampled candidates for each distinguishing input x∗. That is, instead of generating all candidate outputs y first and only then querying the neural oracle to select the most likely one, we query the oracle after each generated candidate output and let the model decide whether more candidates should be generated. Concretely, we define a threshold hyperparameter t ∈ [0, 1] and return the first candidate output y∗ for which the probability of the output being correct is above this threshold. Only if there are no candidate outputs with probability higher than t do we return the argmax of all the candidates. Note that for t = 1 this formulation is equivalent to returning the argmax of all the candidates, while for t = 0 it corresponds to always returning the first candidate, regardless of its probability of being correct. We show the effect of different threshold values as well as of the number of generated candidate outputs in Section 5." }, { "heading": "4 INSTANTIATION OF OUR APPROACH TO ANDROID LAYOUT SYNTHESIS", "text": "In this section we describe how to apply our approach to the existing Android layout program synthesizer InferUI (Bielik et al., 2018). Here, the input x ∈ R^4 defines the absolute positions of the top-left and bottom-right coordinates of a given device screen, while the output y ∈ R^{n×4} consists of n views and their absolute positions.\nFinding a distinguishing input and candidate outputs Because InferUI uses symbolic search, finding a distinguishing input and candidate outputs is encoded as a logical query solved by the synthesizer, as described in Section 3. However, instead of synthesizing the layout program containing the correct outputs of all the views at once (as done in InferUI), we run the synthesizer n times, each time predicting the correct output for only a single view (starting from the largest view), which is then added as an additional input-output example (we provide a concrete example in Appendix C.4; a sketch of this per-view procedure is given below). This is necessary since there are exponentially many combinations of the view positions when considering all the views at once, and the InferUI synthesizer is not powerful enough to include the correct one in the set of candidate outputs (e.g., for samples with more than 10 views it is included in less than 4% of the cases).
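A minimal sketch of this per-view scheme, combined with the threshold-based candidate control described in Section 3 (synthesize_view and oracle_score are hypothetical wrappers around the synthesizer and the learned oracle; the defaults for t and max_candidates are placeholders):

def predict_views(synthesizer, oracle_score, I, x_star, n_views, t=0.9, max_candidates=16):
    # Predict view positions on x_star one view at a time, largest view first.
    fixed = []                                    # positions already committed on x_star
    for _ in range(n_views):
        candidates, scores = [], []
        while len(candidates) < max_candidates:
            y = synthesizer.synthesize_view(I, x_star, fixed, exclude=candidates)
            if y is None:                         # no further distinct candidate exists
                break
        # at least one candidate always exists (the output of the current program)
            s = oracle_score(x_star, fixed + [y], I)
            candidates.append(y)
            scores.append(s)
            if s > t:                             # confident enough: stop sampling early
                break
        best = candidates[scores.index(max(scores))]
        fixed.append(best)                        # commit this view and continue
    return fixed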
The advantage of allowing the position of only a single view to change, while fixing the positions of all prior views, is that the correct candidate output is much easier to include. The disadvantage is that the neural oracle only has access to partial information (consisting of the prior view positions) and therefore performs a sequence of greedy predictions rather than optimizing all view positions jointly.\nNeural oracle Because the input x has the same dimensions as each view, we encode it as an additional view in all our network architectures. In Figure 2 we show three different neural architectures that implement the oracle function f, each of which uses a different way to encode the input-output example into a hidden representation, followed by a fully-connected ReLU layer and a softmax that computes the probability that the output is correct. In the following, we describe the architecture of all the models. The formal feature definitions are included in Appendix A.\nIn (CNN), the output is converted to an image (each view is drawn as a rectangle with a 1px black border on a white background) and used as input to a convolutional neural network (CNN) with 3 convolutional layers (64, 32 and 16 filters of size 5×5, respectively) and max pooling with kernel size 2 and stride 2. To support outputs computed for different screen sizes, the image dimensions are slightly larger than the largest device size. We regularize the network during training by positioning the outputs with a random offset such that they are still fully inside the image, instead of placing them in the center. This is possible as the image used as input to the CNN is larger than the device size. We provide a visualization of this regularization in Appendix C.2.\nIn (MLP), the output is transformed into a normalized feature vector and then fed into a feedforward neural network with 3 hidden layers of size 512 with ReLU activations. To encode the properties of a candidate output y, we use high-level handcrafted features adapted from InferUI, such as the number of view intersections or whether the views are aligned or centered. By instantiating the features for both horizontal and vertical orientation we obtain a vector of size 30, denoted as ϕMLP(x∗, y∗), which is used as input to the neural network. For the model f(✓ | x∗, y∗, I) the network input is a concatenation of the output features (as before) ϕMLP(x∗, y∗), the features of each sample in the input specification [ϕMLP(x, y)]_{(x,y)∈I}, and their difference [ϕMLP(x∗, y∗) − ϕMLP(x, y)]_{(x,y)∈I}. This difference captures how the views have been resized or moved between the devices with different screen dimensions. It allows the model to distinguish between outputs that are all likely when considered in isolation but not when also compared to the examples in I (as illustrated in Appendix C.3).\nIn (RNN), we exploit the fact that each output consists of a set of views y∗ = [v1; . . . ; vn]: an encoder first computes a hidden representation of each view, and these representations are then combined with an LSTM to compute the representation of the whole output. To encode a view vi, we extract pairwise feature vectors with all other views (including the input), ϕRNN(vi, x∗, y∗) = [φ(vi, vj)]_{vj ∈ {x∗} ∪ y∗ \ vi}, and combine them using an LSTM. Here, φ : R^4 × R^4 → R^n is a function that extracts n real-valued features from a pair of views.
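To make this concrete, a sketch of one plausible reading of φ, anticipating the 11 transformations described next and formalized in Appendix A.2 (the exact definitions in the implementation may differ; views are assumed non-degenerate, i.e., positive width and height):

def phi(v1, v2):
    # 11 pairwise features for views v = (xl, yt, xr, yb)
    xl1, yt1, xr1, yb1 = v1
    xl2, yt2, xr2, yb2 = v2
    w1, h1 = xr1 - xl1, yb1 - yt1
    w2, h2 = xr2 - xl2, yb2 - yt2
    return [
        xl1 - xl2, xl1 - xr2, xr1 - xl2, xr1 - xr2,   # 4 horizontal distances
        yt1 - yt2, yt1 - yb2, yb1 - yt2, yb1 - yb2,   # 4 vertical distances
        w1 - w2,                                      # width difference
        h1 - h2,                                      # height difference
        (w1 / h1) / (w2 / h2),                        # ratio of the aspect ratios
    ]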
For each pair of views, we apply 11 simple transformations of the view coordinates, capturing their distance (4 vertical, 4 horizontal), the size difference (in width and height) and the ratio of the aspect ratios. For the model f(✓ | x∗, y∗) we additionally use 17 high-level features computed for each view that are adapted from InferUI. When using f(✓ | x∗, y∗, I), these additional high-level features are not required and instead we only use the 11 simple transformations, combined in the same way as for the MLP model, that is, by concatenating ϕRNN(vi, x∗, y∗), [ϕRNN(vi, x, y)]_{(x,y)∈I} and their difference [ϕRNN(vi, x∗, y∗) − ϕRNN(vi, x, y)]_{(x,y)∈I}.\nDatasets To train our models we obtained three datasets DU, DS and DS+, each containing an increasing amount of information at the expense of being harder to collect.\nThe unsupervised dataset DU = {(xi, yi)}_{i=1}^{N} is the simplest one and contains only positive input-output samples, obtained by sampling ≈22,000 unique screenshots (including the associated metadata of all the absolute view positions) of Google Play Store applications taken from the Rico dataset (Deka et al., 2017). Since the screenshots always consist of multiple layout programs combined together, we approximate the individual programs by sorting the views in decreasing order of their size and taking a prefix of random length (of up to 30 views). For all of the datasets, we deduplicate the views that have the same coordinates and filter out views with a negative width or height.\nThe supervised dataset DS = {(✓, xi, yi)}_{(xi,yi)∈DU} ∪ {(✗, xi, yi + δij)}_{(xi,yi)∈DU, 1≤j≤15} contains both correct and incorrect input-output examples. In our work this dataset is produced synthetically from DU by extending it with incorrect samples. Concretely, the positive samples correspond to those in the dataset DU, and for each positive sample we generate up to 15 negative samples by applying a transformation δij to the correct output (a sketch of this perturbation step is given below). The transformations considered in our work are sampled from the common mistakes the synthesizer can make – resizing a view, shifting a view horizontally, shifting a view vertically, or any combination of the above.\nThe supervised dataset DS+ = {(✓, xi, yi, Ii)}_{i=1}^{N} ∪ {(✗, xj, yj, Ij)}_{j=1}^{M} additionally includes the input specification, where each Ii contains the same application rendered on multiple devices. We downloaded the same applications as used in the Rico dataset from the Google Play Store and executed them on three Android emulators with different device sizes. The number of valid samples is ≈600, since not all applications could be downloaded or executed, or they did not produce the same set of views (or screen content) when executed on three different devices. The negative examples are generated by running the synthesizer with an input specification I containing a single sample and selecting up to 16 outputs that are inconsistent with the ground-truth output for the other devices." }, { "heading": "5 EVALUATION", "text": "We evaluate our approach by applying it to an existing Android layout synthesizer called InferUI (Bielik et al., 2018), as described in Section 4. InferUI is a symbolic synthesizer which encodes the synthesis problem as a set of logical constraints that are solved using the state-of-the-art SMT solver Z3 (De Moura & Bjørner, 2008).
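As referenced in Section 4, a sketch of how the negative samples for DS could be generated from a correct output (the transformation types follow the description above; the noise magnitudes are assumptions):

import random

def perturb_output(y, max_shift=40, max_resize=40):
    # Create a negative sample by resizing and/or shifting one view of output y.
    y = [list(v) for v in y]                  # y: list of views (xl, yt, xr, yb)
    v = random.choice(y)
    ops = random.sample(["resize", "shift_h", "shift_v"], random.randint(1, 3))
    if "resize" in ops:                       # resize the view
        v[2] += random.randint(-max_resize, max_resize)
        v[3] += random.randint(-max_resize, max_resize)
    if "shift_h" in ops:                      # shift the view horizontally
        dx = random.randint(-max_shift, max_shift)
        v[0] += dx
        v[2] += dx
    if "shift_v" in ops:                      # shift the view vertically
        dy = random.randint(-max_shift, max_shift)
        v[1] += dy
        v[3] += dy
    return [tuple(v) for v in y]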
To improve generalization, InferUI already implements two techniques – a probabilistic model that selects the most likely program among those that satisfy the input specification, and a set of handcrafted robustness constraints φ(p) that prevent synthesizing layouts which violate good design practices. We show that even if we disable these two optimizations and instead guide the synthesizer purely by extending the input specification with additional input-output examples, we can still achieve an accuracy increase from 35% to 71%.\nIn all our experiments, we evaluate our models and InferUI on a test subset of the DS+ dataset which consists of 85 Google Play Store applications, each with ground-truth absolute view positions on three different screen dimensions – 1400×2520, 1440×2560 and 1480×2600. We use one screen dimension as the input specification I, the second as the distinguishing input, and the third one only to compute the generalization accuracy. The generalization accuracy of a synthesized program p |= I is defined as the percentage of views which the program p renders at the correct position.\nInferUI Baseline To establish a baseline, we run InferUI in three modes, as shown in Table 1. The baseline mode returns the first program that satisfies the input specification, denoted as p |= I, and achieves only 15.5% generalization accuracy. In the second mode the synthesizer returns the most likely program according to a probabilistic model q(p | I), which leads to an improved accuracy of 24.7%. The third mode additionally defines a set of robustness properties φ(p) that the synthesized program needs to satisfy, which together with the probabilistic model achieve 35.2% accuracy. The generalization accuracy of all InferUI models is relatively low as we are using a challenging dataset where each sample contains on average 12 views. Note, however, that this is expected since increasing the number of views leads to an exponentially larger hypothesis space.\nFurther, to establish an upper bound on how effective a candidate-ranking approach can be, we query the synthesizer for up to 100 different candidate programs (each producing a unique output) and check how often the correct program is included. While for small samples (with up to 4 views) the correct program is almost always included, for samples with 6 views it is among the synthesized candidates in only 30% of the cases, and for samples with more than 10 views in less than 4%. Sampling more outputs will help only slightly, as increasing the number of views would require generating exponentially more candidates.\nOur Work We apply our approach to the InferUI synthesizer by iteratively generating additional input-output examples that strengthen the input specification. The specification initially contains the absolute positions of all the views for one device, and we extend the specification by adding one view at a time (rendered on a different device), as described in Section 4. In the experiments presented here we focus on evaluating the overall improvement of the InferUI synthesizer extended with our approach. We provide additional experiments that evaluate the effectiveness of the neural oracle as well as an ablation study in Appendix B.\nGeneralization Accuracy The results of our approach instantiated with various neural oracle models are shown in Table 2. The best model trained on the dataset DS has almost the same accuracy as InferUI with all its optimizations enabled.
This means that it is possible to replace the existing optimizations and handcrafted robustness constraints by training on an easy-to-obtain dataset consisting of correct outputs and their perturbations. More importantly, when training on the harder-to-obtain dataset DS+, the generalization accuracy more than doubles to 71%, since the model can also condition on the input specification I. However, the results also show that the design of the neural oracle model is important for achieving good results. In particular, both MLP models achieve poor accuracy since the high-level handcrafted features adapted from the InferUI synthesizer are not expressive enough to distinguish between correct and incorrect outputs. The CNN models achieve better accuracy but are limited for the opposite reason: they try to learn all the relevant features from the raw pixels, which is challenging as many features require pixel-level accuracy across large distances (e.g., whether two views in opposite parts of the screen are aligned or centered). The RNN model performs the best, especially when also having access to the input specification. Even though it also processes low-level information, such as the distance or size difference between the views, it uses a more structured representation that first computes individual view representations, which are then combined to capture the whole output.\nNumber of Candidate Outputs In Table 3 we show the effect of different threshold values t, used by the neural oracle to dynamically control whether to search for more candidate outputs, as well as of the maximum number of candidate outputs. We can see that using the threshold slightly improves the accuracy (+0.3%) and, more importantly, significantly reduces the average number of generated candidate outputs from 14.7 to 6.0 with the same maximum number of generated outputs |y| = 16.\nIncorporating User Feedback Even though our approach significantly improves over the InferUI synthesizer, it does not achieve perfect generalization accuracy. This is because for many synthesizers perfect generalization is usually not achievable – the correct program and its outputs depend on user preference, which is expressed only as a severely underspecified set of input-output examples. For example, for a given input specification there are often multiple good layout programs that do not violate any design guidelines, and which one is chosen depends on the particular user. To achieve 100% in practice, we perform an experiment where the user can inspect the input-output examples generated by our approach and correct them if needed. Then, we simply count how many corrections were required. The applications in our dataset have on average 12 views, and for our best model no user corrections are required in 30% of the cases, while in 27%, 15%, 12% and 5% of the cases the user needs to provide 1, 2, 3 or 4 corrections, respectively. In contrast, InferUI with all optimizations enabled requires on average twice as many user corrections and achieves perfect generalization (i.e., zero user corrections) in only 3.5% of the cases." }, { "heading": "6 CONCLUSION", "text": "In this work we present a new approach to improve the generalization accuracy of existing program synthesizers.
The main components of our method are: (i) an existing program synthesizer, (ii) a refinement loop around that synthesizer, which uses a neural oracle to iteratively extend the input specification with new input-output examples, and (iii) a neural oracle trained using an easy-to-obtain dataset consisting of program outputs. To show the practical usefulness of our approach we apply it to an existing Android layout synthesizer called InferUI (Bielik et al., 2018) and improve its generalization accuracy by 2×, from 35% to 71%, when evaluated on a challenging dataset of real-world Google Play Store applications." }, { "heading": "APPENDIX", "text": "We provide three appendices. Appendix A contains an in-depth description of the feature transformations for the MLP and RNN models. Appendix B contains an additional ablation experiment as well as results from the oracle evaluation. Appendix C provides visualizations of positive and negative candidate outputs, the CNN regularization, as well as outputs from the end-to-end synthesis." }, { "heading": "A FEATURE DEFINITIONS", "text": "In this section we describe in detail the feature transformations used in Section 4. As mentioned in Section 4, each view consists of the 4 coordinates xl (x-left), yt (y-top), xr (x-right), yb (y-bottom). We use v.w for the width, v.h for the height and v.r for the aspect ratio (v.w/v.h) of view v." }, { "heading": "A.1 MLP", "text": "In Table 4 we define the 7 feature types that lead to the vector of size 30 used in the MLP model. All the features are normalized (divided) by the factor in the normalization column." }, { "heading": "A.2 RNN", "text": "We formally define the 11 transformations used as the pairwise view feature vector φ : R^4 × R^4 → R^n in the RNN model. These features capture properties such as the distance between the two views or their size difference. The feature vectors extracted for each pair of views are combined into a fixed-length representation by passing them through the LSTM, in decreasing order of view size. In Figure 3 all the properties are listed and visualized.\nTo guide the model for f(✓ | x∗, y∗), we defined 17 more abstract features, adapted from InferUI, which are defined using the simple transformations shown in Figure 3. Concretely, we define the following 17 features:\n• For alignments (8 features: 4 horizontal, 4 vertical): Compare whether view v1 is aligned with v2, such that one of the 4 distance functions is 0, e.g., dll(v1, v2) = 0.\n• For centering (2 features: 1 horizontal, 1 vertical): Compare whether view v1 is centered in v2, such that dll(v1, v2) = −drr(v1, v2).\n• For overlaps (4 features: 2 horizontal, 2 vertical): Check whether v1 and v2 can possibly intersect, i.e., have overlapping x-coordinates in the horizontal case: v1.xl ≥ v2.xl and v1.xl ≤ v2.xr.\n• For the same size (2 features): Compare whether the height or width difference of v1 and v2 is equal to 0, such that sw(v1, v2) = 0 and sh(v1, v2) = 0.\n• For the same ratio (1 feature): Compare whether the ratio of v1's and v2's aspect ratios is 1, such that r(v1, v2) = 1." }, { "heading": "B EXPERIMENTS", "text": "In this appendix, we evaluate the oracle performance and include an ablation study of our models." }, { "heading": "B.1 ORACLE EVALUATION", "text": "We evaluate the oracle's performance (shown in Table 5) using two metrics: the pairwise accuracy and the one vs. many accuracy.
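A sketch of how these two metrics could be computed from raw oracle scores (the precise definitions follow below; each test sample is assumed to provide the score of its positive candidate and a list of negative scores):

def pairwise_accuracy(samples, count_ties=False):
    # samples: list of (pos_score, [neg_scores]); fraction of correctly ranked pairs
    correct = total = 0
    for pos, negs in samples:
        for neg in negs:
            correct += (pos >= neg) if count_ties else (pos > neg)
            total += 1
    return correct / total

def one_vs_many_accuracy(samples):
    # Fraction of samples where the positive candidate outscores every negative one.
    return sum(all(pos > neg for neg in negs) for pos, negs in samples) / len(samples)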
The pairwise accuracy treats the evaluation as a binary ranking problem – a pair consisting of a positive and a negative candidate is considered correct if the score of the positive example is higher than that of the negative example (i.e., the pair was ranked correctly). To capture whether the network cannot distinguish between the samples, we also report a second value in square brackets, in which a pair counts as correct if the score of the positive example is higher than or equal to the score of the negative example. In addition, we measure the one vs. many accuracy, in which the oracle is correct if the score of the correct candidate is higher than the scores of all incorrect candidates (since selecting the correct candidate out of many candidates represents what we are interested in during the end-to-end synthesis experiment).\nUsually, models with a higher pairwise accuracy also have a higher one vs. many accuracy, since more pairs are ranked correctly. The one vs. many accuracy might be lower than expected if the mistakes of the pairwise accuracy are spread over a large number of different samples (instead of having many mistakes in a small set of samples).\nIn general, the results of the one vs. many accuracy correlate with the results of the end-to-end synthesis. For example, RNN+CNN and RNN trained on DS+ achieve the highest scores in both of the experiments. One exception is the MLP model trained on the DS+ dataset, which performs worse on the end-to-end synthesis experiment even though the oracle performs better. A reason for this is that the MLP often makes mistakes when predicting the early views, which then affect the positioning of the subsequent views. For the same reason, the accuracies of the oracle experiment are in general higher than the accuracy we observe in the synthesis experiment. Another interesting insight is that the MLP often scores the positive and the negative examples the same, since there is a large difference in the pairwise accuracy depending on whether we count the same score as correct or not. This shows that the MLP's feature functions are not expressive enough." }, { "heading": "B.2 ABLATION STUDY", "text": "To investigate which high-level properties are learned by our models, we generate an ablation dataset from the test dataset of DS+. The results of the study are collected in Table 6 and Table 7. Recall from Section 4 that ϕMLP(x, y) computes a vector of size 30 with the handcrafted functions which are described in detail in Appendix A.1. For each pair of positive (x∗, ypos) and negative examples (x∗, yneg) and their corresponding sample in the input specification (x, y), we compute the difference of their handcrafted feature functions d = [ϕMLP(x∗, ypos) − ϕMLP(x, y)]_{(x,y)∈I} − [ϕMLP(x∗, yneg) − ϕMLP(x, y)]_{(x,y)∈I}. We add a pair to the ablation dataset for the violated property if d ≠ 0, that is, if the property on the positive sample is different from the one on the negative sample. For example, if the number of view intersections of the positive sample and the input specification are the same, but different for the negative sample, we add this pair of positive and negative samples to the intersections-ablation dataset.\nThe MLP models perform very well on all the ablation properties. The reason is that the MLP's features are the handcrafted features, which are designed to capture the properties in the ablation dataset.
In contrast, the MLP's performance on the whole test dataset is worse, which indicates that some properties in the dataset are not captured by the handcrafted features.\nThe CNN fails at preserving the aspect ratios due to the pooling and downsampling operations. Alignment and intersection properties can be learned with local filters detecting overlapping views. Off-screen views can be detected by checking the downsampled representation after the convolutional layers. On these three properties, the CNN performs best. The RNN performs better on the ablation dataset than the CNN, since it processes the exact numeric values of the view coordinates.\nThe overall performance on the ablation dataset is better than the one on the whole test dataset for all models, indicating that there are some features not expressed in the handcrafted functions. Furthermore, the results of the ablation study should be considered carefully: the ablation study shows tendencies, but features could be correlated or share common hidden features." }, { "heading": "C VISUALIZATIONS", "text": "In this section, we provide visualizations of two concepts introduced in Section 4 – the rendered output candidates and how the CNN input is shifted for regularization. Further, we visualize the steps of the end-to-end synthesis and render outputs of synthesized programs." }, { "heading": "C.1 POSITIVE AND NEGATIVE OUTPUT CANDIDATES", "text": "Figure 4 shows four different rendered output candidates. The correct candidate is on the left and the three incorrect ones are on the right. The candidates differ in their 6th view, which is moved around and overlaps in all three incorrect candidates. The three negative examples are generated by the synthesizer as described in Section 4 (for dataset DS+) or by applying perturbations sampled from the synthesizer's mistakes (for dataset DS)." }, { "heading": "C.2 CNN REGULARIZATION", "text": "Figure 5 visualizes how the robustness of the CNN is increased by placing the input randomly within the input image, as described in Section 4." }, { "heading": "C.3 ADVANTAGE OF HAVING THE ADDITIONAL INPUT SPECIFICATION IN THE DS+ DATASET", "text": "Figure 6 visualizes the absolute view positions in the input specification on the left and the four candidates (differing in the red view) on the distinguishing device x∗ on the right. Looking at the four candidates, the first candidate is incorrect, since the red view is not centered. The third candidate is not the correct one either, since the red view intersects with the view below. It is hard to decide between the second and fourth candidate, since the red view is centered in both cases. When taking the input specification into consideration, the second candidate is the correct one, since there is no space between the red view and the view below in candidate 4 (unlike in the second candidate and the input specification).\nFigure 6: The input specification provides an important indication that candidate 2 is the correct one (unlike candidate 4)." }, { "heading": "C.4 END-TO-END SYNTHESIS PROCEDURE", "text": "In Figure 7 we present an example of the end-to-end synthesis described in Section 3 and Section 4. The input of the synthesizer is in the top left of Figure 7 and consists of an input specification I with a single input-output pair (x, y).
The synthesis loop starts with the synthesizer generating a candidate program p1 |= I (not shown) and checking whether ambiguities exist. To do this, it finds a distinguishing input x∗, in our example a smaller device x∗ = [0, 0, 1400, 2520], and a set of programs p2, . . . , pn that satisfy the input specification I but produce different outputs when evaluated on x∗. Three possible candidate outputs p1(x∗), p2(x∗), p3(x∗) are visualized in Figure 7 (column Iteration 1). Note that, as described in Section 4, the synthesis proceeds by synthesizing the views iteratively; therefore, the first iteration contains only the first view v1. The oracle (in this example the CNN+RNN model trained on DS+) selects the second candidate (the oracle prediction is indicated by the dashed rectangle) since its score, 0.88, is the largest. The position of v1 on the device x∗ is added to the specification and the iterative synthesis proceeds by selecting the correct output for the second view. In the second iteration we perform the same steps, except that now we synthesize different candidate outputs for the second view v2 on the device x∗, given that the position of the first view v1 is fixed from the previous iteration. The same process is repeated until the absolute positions of all the views are predicted for the distinguishing input x∗. The new input specification, shown at the bottom of Figure 7, contains the absolute view positions for both the original input x as well as the distinguishing input x∗. After that, the whole process repeats by querying for another distinguishing device and extending the input specification even further.\nNote that in our evaluation we restrict the distinguishing input to be among those device sizes for which we trained a model. Since our datasets consist of three different screen sizes, the choice is small and all the experiments in Section 5 are evaluated with a single distinguishing input x∗. However, we could generate more than three dimensions for the training dataset and use them to train additional models. This was, however, not needed, as three different devices are typically enough to resolve the majority of ambiguities (as long as the oracle does not make mistakes)." }, { "heading": "C.4.1 LIMITATIONS OF THE MLP FEATURES", "text": "Figure 8 visualizes eight candidate outputs in the first synthesis iteration for the same application as in Figure 7. The MLP oracle assigns a score of 0 to all the output candidates in which view v1 is off-screen. However, the handcrafted features are not expressive enough to distinguish between the candidate outputs in the bottom row, which all have a score of 0.76. In general, the MLP performs badly when selecting the first views (earlier iterations), since there are only a few views from which properties (e.g., margins) can be extracted. On the bottom right, the synthesis result is compared to the correct output. Since the first view was not selected correctly in the synthesis, all the subsequent inner views are also slightly off in comparison to the correct output." }, { "heading": "C.5 EXAMPLES OF SYNTHESIS OUTPUTS", "text": "Figure 9 visualizes the rendered output of the synthesizer for the distinguishing device x∗. The correct output is on the left while the synthesis outputs of the different oracles are on the right. At first glance, the outputs of the MLP look visually appealing, since the handcrafted features contain properties like intersections or off-screen views and the model learns to avoid them.
However, it is often not expressive enough, since the view sizes or positions are often different from those in the correct output. This happens in particular if there are only a few views, which is why the second view is often misplaced, leading to consecutive errors. This also explains the bad performance in the synthesis experiment in Table 2 in comparison to the oracle experiment in Table 5. The CNN outputs look much worse and often contain views which are misaligned or not centered. The RNN performs quite well on the first and third application but fails on the second application. The RNN+CNN predicts the correct output for the first application but fails on five views (out of 22) in the second and on seven views (out of 17) in the third application." } ]
2020
GUIDING PROGRAM SYNTHESIS BY LEARNING TO GENERATE EXAMPLES
SP:ca085e8e2675fe579df4187290b7b7dc37b8a729
[ "In this paper, the authors address few-shot learning via a precise collaborative hallucinator. In particular, they follow the framework of Wang et al. (2018) and introduce two kinds of training regularization. The soft precision-inducing loss follows the spirit of adversarial learning, by using knowledge distillation. Additionally, a collaborative objective is introduced as intermediate supervision to enhance the learning capacity of the hallucinator. ", "This paper describes a method that builds upon the work of Wang et al. It meta-learns to hallucinate additional samples for few-shot classification tasks. The two main insights of this paper are a) to propose a soft-precision term which compares the classifiers' predictions for all classes other than the ground-truth class for both a few-shot training set and the hallucinated set, and b) to introduce the idea of applying direct early supervision in the feature space in which the hallucination is conducted, in addition to the classifier embedding space. This allows for stronger supervision and prevents the hallucinated samples from being unrepresentative of the classes. The authors show small but consistent improvements in performance on two benchmarks, ImageNet and miniImageNet, with two different network architectures, versus various state-of-the-art meta-learning algorithms with and without hallucination. The authors have adequately cited and reviewed the existing literature. They have also conducted many experiments (both in the main paper and in the supplementary material) to show the superior performance of their approach versus the existing ones. Furthermore, their ablation studies, both for the type of soft precision loss and for their various individual losses, are quite nice and thorough. " ]
Learning to hallucinate additional examples has recently been shown to be a promising direction for addressing few-shot learning tasks, which aim to learn novel concepts from very few examples. The hallucination process, however, is still far from generating effective samples for learning. In this work, we investigate two important requirements for the hallucinator — (i) precision: the generated examples should lead to good classifier performance, and (ii) collaboration: both the hallucinator and the classification component need to be trained jointly. By integrating these requirements as novel loss functions into a general meta-learning with hallucination framework, our model-agnostic PrecisE Collaborative hAlluciNator (PECAN) facilitates data hallucination to improve the performance of new classification tasks. Extensive experiments demonstrate state-of-the-art performance on competitive miniImageNet- and ImageNet-based few-shot benchmarks in various scenarios.
[]
[ { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez", "Matthew W Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando De Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Evgeniy Bart", "Shimon Ullman" ], "title": "Cross-generalization: Learning novel classes from a single example by feature replacement", "venue": "In CVPR,", "year": 2005 }, { "authors": [ "Jonathan Baxter" ], "title": "A Bayesian/information theoretic model of learning to learn via multiple task sampling", "venue": "Machine Learning,", "year": 1997 }, { "authors": [ "Samy Bengio", "Yoshua Bengio", "Jocelyn Cloutier", "Jan Gecsei" ], "title": "On the optimization of a synaptic learning rule", "venue": "In Preprints Conf. Optimality in Artificial and Biological Neural Networks,", "year": 1992 }, { "authors": [ "Luca Bertinetto", "João F Henriques", "Jack Valmadre", "Philip Torr", "Andrea Vedaldi" ], "title": "Learning feedforward one-shot learners", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Frank Wang", "Jia-Bin Huang" ], "title": "A closer look at few-shot classification", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Zitian Chen", "Yanwei Fu", "Yu-Xiong Wang", "Lin Ma", "Wei Liu", "Martial Hebert" ], "title": "Image deformation meta-networks for one-shot learning", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Mandar Dixit", "Roland Kwitt", "Marc Niethammer", "Nuno Vasconcelos" ], "title": "AGA: Attribute-Guided Augmentation", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Matthijs Douze", "Arthur Szlam", "Bharath Hariharan", "Hervé Jégou" ], "title": "Low-shot learning with large-scale diffusion", "venue": null, "year": 2018 }, { "authors": [ "Nikita Dvornik", "Cordelia Schmid", "Julien Mairal" ], "title": "Diversity with cooperation: Ensemble methods for few-shot classification", "venue": "arXiv preprint arXiv:1903.11341,", "year": 2019 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Towards a neural statistician", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Li Fei-Fei", "Rob Fergus", "Pietro Perona" ], "title": "One-shot learning of object categories", "venue": null, "year": 2006 }, { "authors": [ "Michael Fink" ], "title": "Acquiring a new class from a few examples: Learning recurrent domain structures in humans and machines", "venue": "PhD thesis, The Hebrew University of Jerusalem,", "year": 2011 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Kelvin Xu", "Sergey Levine" ], "title": "Probabilistic model-agnostic meta-learning", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Hang Gao", "Zheng Shou", "Alireza Zareian", "Hanwang Zhang", "Shih-Fu Chang" ], "title": "Low-shot learning via covariance-preserving adversarial augmentation networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Dileep George", "Wolfgang Lehrach", "Ken Kansky", "Miguel Lázaro-Gredilla", "Christopher Laan", "Bhaskara Marthi", "Xinghua Lou", "Zhaoshi 
Meng", "Yi Liu", "Huayan Wang", "Alex Lavin", "D. Scott Phoenix" ], "title": "A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs", "venue": "Science,", "year": 2017 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Bharath Hariharan", "Ross Girshick" ], "title": "Low-shot visual recognition by shrinking and hallucinating features", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Jongmin Kim", "Taesup Kim", "Sungwoong Kim", "Chang D Yoo" ], "title": "Edge-labeling graph neural network for few-shot learning", "venue": null, "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhudtinov" ], "title": "Siamese neural networks for one-shot image recognition", "venue": "In ICML Deep Learning Workshop,", "year": 2015 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Josh B Tenenbaum" ], "title": "One-shot learning by inverting a compositional causal process", "venue": "In NIPS,", "year": 2013 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": null, "year": 2015 }, { "authors": [ "Kwonjoon Lee", "Subhransu Maji", "Avinash Ravichandran", "Stefano Soatto" ], "title": "Meta-learning with differentiable convex optimization", "venue": null, "year": 2019 }, { "authors": [ "Zhenguo Li", "Fengwei Zhou", "Fei Chen", "Hang Li" ], "title": "Meta-SGD: Learning to learn quickly for few-shot learning", "venue": "arXiv preprint arXiv:1707.09835,", "year": 2017 }, { "authors": [ "Erik G Miller", "Nicholas E Matsakis", "Paul A Viola" ], "title": "Learning from one example through shared densities on transforms", "venue": "In CVPR,", "year": 2000 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad", "Xi Chen", "Pieter Abbeel" ], "title": "A simple neural attentive metalearning", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Alex Nichol", "John Schulman" ], "title": "Reptile: A scalable metalearning algorithm", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Boris Oreshkin", "Pau Rodrı́guez López", "Alexandre Lacoste" ], "title": "TADAM: Task dependent adaptive metric for improved few-shot learning", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Hang Qi", "Matthew Brown", "David G Lowe" ], "title": "Low-shot learning with imprinted weights", "venue": null, "year": 2018 }, { "authors": [ "Siyuan Qiao", "Chenxi Liu", "Wei Shen", "Alan L Yuille" ], "title": "Few-shot image recognition by predicting parameters from activations", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": 
"Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B Tenenbaum", "Hugo Larochelle", "Richard S Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Andrei A Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ruslan Salakhutdinov", "Josh Tenenbaum", "Antonio Torralba" ], "title": "One-shot learning with a hierarchical nonparametric Bayesian model", "venue": "Unsupervised and Transfer Learning Challenges in Machine Learning,", "year": 2012 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training GANs", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew Botvinick", "Daan Wierstra", "Timothy Lillicrap" ], "title": "One-shot learning with memory-augmented neural networks", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-.", "venue": "hook. Diploma thesis, Institut f. Informatik, Tech. Univ. Munich,", "year": 1987 }, { "authors": [ "Jürgen Schmidhuber", "Jieyu Zhao", "Marco Wiering" ], "title": "Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement", "venue": "Machine Learning,", "year": 1997 }, { "authors": [ "Lauren A Schmidt" ], "title": "Meaning and compositionality as statistical induction of categories and constraints", "venue": "PhD thesis, Massachusetts Institute of Technology,", "year": 2009 }, { "authors": [ "Eli Schwartz", "Leonid Karlinsky", "Joseph Shtok", "Sivan Harary", "Mattias Marder", "Abhishek Kumar", "Rogerio Feris", "Raja Giryes", "Alex Bronstein" ], "title": "Delta-encoder: an effective sample synthesis method for few-shot object recognition", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Konstantin Shmelkov", "Cordelia Schmid", "Karteek Alahari" ], "title": "How good is my GAN", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard S Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip H.S. 
Torr", "Timothy M Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": null, "year": 2018 }, { "authors": [ "Sebastian Thrun" ], "title": "Lifelong learning algorithms", "venue": "Learning to learn,", "year": 1998 }, { "authors": [ "Eleni Triantafillou", "Richard Zemel", "Raquel Urtasun" ], "title": "Few-shot learning through an information retrieval lens", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "JMLR, 9:2579–2605,", "year": 2008 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy P. Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Yu-Xiong Wang", "Martial Hebert" ], "title": "Learning to learn: Model regression networks for easy small sample learning", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Yu-Xiong Wang", "Deva Ramanan", "Martial Hebert" ], "title": "Learning to model the tail", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Yu-Xiong Wang", "Ross Girshick", "Martial Hebert", "Bharath Hariharan" ], "title": "Low-shot learning from imaginary data", "venue": null, "year": 2018 }, { "authors": [ "Alex Wong", "Alan L Yuille" ], "title": "One shot learning via compositions of meaningful patches", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Yongqin Xian", "Saurabh Sharma", "Bernt Schiele", "Zeynep Akata" ], "title": "f-VAEGAN-D2: A feature generating framework for any-shot learning", "venue": null, "year": 2019 }, { "authors": [ "Wang" ], "title": "2018), to make a single hallucinator robust to different sample sizes, we randomly sample different sized examples per class from S∗", "venue": null, "year": 2018 }, { "authors": [ "Wang" ], "title": "Additional visual comparisons of top-1 classification results on four representative novel classes between our PECAN and the state-of-the-art meta-learned hallucinator", "venue": "(Wang et al.,", "year": 2020 }, { "authors": [ "Wang" ], "title": "2018) are overlaid on the images), but correctly classified by PECAN; right 3 columns: test images from other classes that are misclassified by Wang et al. (2018) as the target class, but correctly classified by PECAN. Our approach is able to model a large range of visual variations and diversity", "venue": null, "year": 2018 }, { "authors": [ "Finn" ], "title": "Comparison on the n = 5-shot sinusoidal regression task", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modern deep learning models rely heavily on large amounts of annotated examples (Deng et al., 2009). Their data-hungry nature limits their applicability to real-world scenarios, where the cost of annotating examples is prohibitive, or they involve rare concepts (Zhu et al., 2014; Fink, 2011). In contrast, humans can grasp a new concept rapidly and make meaningful generalizations, even from a single example (Schmidt, 2009). To bridge this gap, there has been a recent resurgence of interest in few-shot learning that aims to learn novel concepts from very few labeled examples (Fei-Fei et al., 2006; Vinyals et al., 2016; Wang & Hebert, 2016; Snell et al., 2017; Finn et al., 2017).\nExisting work tries to solve this problem from the perspective of meta-learning (Thrun, 1998; Schmidhuber, 1987), which is motivated by the human ability to leverage prior experiences when tackling a new task. Unlike the standard machine learning paradigm, where a model is trained on a set of exemplars, meta-learning is performed on a set of tasks, each consisting of its own training and test sets (Vinyals et al., 2016). By sampling small training and test sets from a large collection of labeled examples of base classes, meta-learning based few-shot classification approaches learn to extract task-agnostic knowledge, and apply it to a new few-shot learning task of novel classes.\nOne notable type of task-agnostic (or meta) knowledge comes from the shared mechanism of data augmentation or hallucination across categories (Wang et al., 2018; Gao et al., 2018; Schwartz et al., 2018; Zhang et al., 2018a). Hallucinating additional training data by generating images may seem like an easy solution for few-shot learning, but it is often challenging. In fact, the success of this paradigm is usually restricted to certain domains like handwritten characters (Lake et al., 2013), or requires additional supervision (Dixit et al., 2017; Zhang et al., 2018b) or sophisticated heuristics (Hariharan & Girshick, 2017). An alternative to generating raw data in the form of visually realistic images is to hallucinate examples in a learned feature space (Wang et al., 2018; Gao et al., 2018; Schwartz et al., 2018; Zhang et al., 2018a; Xian et al., 2019). This can be achieved by, for example, integrating a “hallucinator” module into a meta-learning framework, where it generates hallucinated examples, guided by real examples (Wang et al., 2018). The learner then uses an augmented training set which includes both the real and the hallucinated examples to learn classifiers. While the existing approaches showed that it is possible to adjust the hallucinator to generate examples that are helpful for classification, the generation process is still far from producing effective samples in the few-shot regime. Our key insight is that, to facilitate data hallucination to improve the performance of new classification tasks, two important requirements should be satisfied: (i) precision: the generated\nexamples should lead to good classifier performance, and (ii) collaboration: all the components including the hallucinator and the learner need to be trained jointly.\nIn this work, we propose PrecisE Collaborative hAlluciNator (PECAN), which integrates these requirements into a general meta-learning with hallucination framework, as shown in Figure 1. Assume that we have a hallucinator to generate additional examples from the original small training set. 
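As context for the requirements discussed next, a minimal sketch of one meta-training episode with such a hallucinator, in the spirit of Wang et al. (2018); the hallucinator and learner interfaces, the noise dimension and the sample counts are placeholders, not the paper's exact implementation:

import torch
import torch.nn.functional as F

def episode_loss(hallucinator, learner, S_train, S_val, n_gen=5, noise_dim=128):
    # One episode: hallucinate extra features from the few real ones, let the
    # learner classify validation examples using the augmented set, and return
    # a loss that back-propagates into both the learner and the hallucinator.
    x, y = S_train                                   # few-shot features and labels
    idx = torch.randint(0, x.size(0), (n_gen,))
    noise = torch.randn(n_gen, noise_dim)
    x_gen = hallucinator(x[idx], noise)              # hallucinated features
    y_gen = y[idx]                                   # inherit the seed labels
    x_aug = torch.cat([x, x_gen])
    y_aug = torch.cat([y, y_gen])
    x_val, y_val = S_val
    logits = learner(x_aug, y_aug, x_val)            # e.g., a prototypical classifier
    return F.cross_entropy(logits, y_val)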
A precise hallucinator indicates that a classifier trained on both the hallucinated and the few real examples should produce superior validation accuracy. This can be achieved by training the hallucinator end-to-end with the learner, and back-propagating a classification loss based on ground-truth labels of validation data (Wang et al., 2018). Since this precision is measured using ground-truth labels, we term it hard precision. More importantly, if the hallucinator perfectly captures the target distribution, a classifier trained on a set of hallucinated examples, despite being generated from a small set of real examples, should produce roughly the same validation accuracy as a classifier trained on a large set of real examples, when these two sets are of the same sample size (Shmelkov et al., 2018). This indicates a similar level of realism and diversity between the generated and the real examples, as shown in Figure 1a. Motivated by this observation, we introduce an additional precision-inducing loss function, which explicitly encourages the hallucinator to generate examples so that a classifier trained on them makes predictions similar to the one trained on a large number of real examples. Given that this precision is measured based on classifier predictions, we term it soft precision. This soft precision, which our experiments show to be effective and complementary to hard precision, is lacking in current approaches (Wang et al., 2018).
Satisfying the precision requirement alone is not sufficient, since the classification objective is still directly associated with the learner, and thus the hallucinator continues to rely on the back-propagated signal to update its parameters. This leads to a potentially undesirable effect of imbalanced training between the hallucinator and the learner: the learner tends to be stronger and makes allowances for errors in the hallucination, whereas the hallucinator becomes "lazy" and does not make its best effort to capture the data distributions, which is empirically observed in our experiments (see Figure 3). To address this issue, our key insight is to enforce direct and early supervision for the hallucinator, and make its contribution to the overall classification transparent, as shown in Figure 1b. Hence, we introduce a collaborative objective for the hallucinator, which allows us to directly influence the generation process to favor highly discriminative examples right after hallucination, and to strengthen the cooperation between the hallucinator and the learner.
Our contributions are three-fold. (1) We propose a novel loss that helps produce precise hallucinated examples, by using the classifier trained on real examples as guidance, and encouraging the classifier trained on hallucinated examples to mimic its behavior. (2) We introduce a collaborative objective for the hallucinator as early supervision, which directly facilitates the generation process and improves the cooperation between the hallucinator and the learner. (3) By integrating these properties, we develop a general meta-learning with hallucination framework, which is model-agnostic and can be combined with any meta-learning model to consistently boost its few-shot learning performance.
Here we mainly focus on few-shot classification tasks, and we show that our approach applies to few-shot regression tasks as well in Appendix A.7." 
}, { "heading": "2 RELATED WORK", "text": "As one of the unsolved problems in machine learning and computer vision, few-shot learning is attracting growing interest in the deep learning era (Miller et al., 2000; Fei-Fei et al., 2006; Lake et al., 2015; Santoro et al., 2016; Wang & Hebert, 2016; Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017; Hariharan & Girshick, 2017; George et al., 2017; Triantafillou et al., 2017; Edwards & Storkey, 2017; Mishra et al., 2018; Douze et al., 2018; Wang et al., 2018; Chen et al., 2019a; Dvornik et al., 2019). Successful generalization from few training samples requires appropriate “inductive biases” or shared knowledge from related tasks (Baxter, 1997), which is commonly acquired through transfer learning and more recently meta-learning (Thrun, 1998; Schmidhuber, 1987; Schmidhuber et al., 1997; Bengio et al., 1992). By explicitly “learning-to-learn” over a series of few-shot learning tasks (i.e., episodes), which are simulated from base classes, meta-learning exploits accumulated task-agnostic knowledge to target few-shot learning problems of novel classes. Within this paradigm of approaches, various types of meta-knowledge has been recently explored, including (1) a generic feature embedding or metric space, in which images are easy to classify using a distance-based classifier such as cosine similarity or nearest neighbor (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Ren et al., 2018; Oreshkin et al., 2018); (2) a common initialization of network parameters (Finn et al., 2017; Nichol & Schulman, 2018; Finn et al., 2018) or learned update rules (Andrychowicz et al., 2016; Ravi & Larochelle, 2017; Munkhdalai & Yu, 2017; Li et al., 2017; Rusu et al., 2019); (3) a transferable strategy to estimate model parameters based on few novel class examples (Bertinetto et al., 2016; Qiao et al., 2018; Qi et al., 2018; Gidaris & Komodakis, 2018), or from an initial small dataset model (Wang & Hebert, 2016; Wang et al., 2017).\nComplementary to these discriminative approaches, our work focuses on synthesizing samples to deal with data scarcity. There has been progress in this direction of data hallucination, either in pixel or feature spaces (Salakhutdinov et al., 2012; George et al., 2017; Lake et al., 2013; 2015; Wong & Yuille, 2015; Rezende et al., 2014; Goodfellow et al., 2014; Radford et al., 2016; Dixit et al., 2017; Hariharan & Girshick, 2017; Wang et al., 2018; Gao et al., 2018; Schwartz et al., 2018; Zhang et al., 2018a). However, it is still challenging for modern generative models to capture the entirety of data distribution (Salimans et al., 2016) and produce useful examples that maximally boost the recognition performance (Wang et al., 2018), especially in the small sample-size regime. In the context of generative adversarial networks (GANs), Shmelkov et al. (2018) show that images synthesized by state-of-the-art approaches, despite their impressive visual quality, are insufficient to tackle recognition tasks, and encourage the use of quantitative measures based on classification results to evaluate GAN models. Rather than using classification results as a performance measure, we go a step further in this paper by leveraging classification objectives to guide the generation process.\nOther related work such as Wang et al. (2018) proposed a general data hallucination framework based on meta-learning, which is a special case of our approach. 
A GAN-like hallucinator takes a seed example and a random noise vector as input to generate a new sample. This hallucinator is trained jointly with the classifier in an end-to-end manner. Delta-encoder (Schwartz et al., 2018) is a variant of Wang et al. (2018), where instead of using noise vectors, it modifies an auto-encoder to extract transferable intra-class deformations, i.e., "deltas", and applies them to novel samples to generate new instances. Unlike the above approaches that directly use the produced samples to train the classifier, MetaGAN (Zhang et al., 2018a) trains the classifier in an adversarial manner to augment the classifier with the ability to discriminate between real and synthesized data. Another variant (Gao et al., 2018) explicitly preserves covariance information to enable better augmentation. Our work investigates critical yet unexplored properties in this paradigm that the data hallucinator should satisfy. These properties are general and can be flexibly incorporated into existing meta-learning approaches and hallucination methods, providing significant gains irrespective of these choices." }, { "heading": "3 META-LEARNING WITH HALLUCINATION", "text": "We begin by presenting the general meta-learning mechanism (Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017) and our meta-learning with hallucination framework for the task of few-shot image classification. Let I be the space of images. We are given two disjoint sets of classes: a base class set C_base and an unseen novel class set C_novel. The corresponding base dataset D_base = {(I_i, y_i), I_i ∈ I, y_i ∈ C_base} contains a large number of labeled examples per class, while the novel dataset D_novel = {(I_i, y_i), I_i ∈ I, y_i ∈ C_novel} consists of only a small number n of labeled examples per class. The goal is to learn a classifier h^cls_θh parameterized by θ_h on D_base that can cross-generalize (Bart & Ullman, 2005) to C_novel even when n is as few as one. Meta-learning aims to achieve such generalization through episodic meta-training that explicitly mimics the few-shot learning scenario on D_base (Vinyals et al., 2016). Specifically, in each episode of the meta-training stage, the meta-learner simulates a few-shot classification task out of D_base. This task is constructed by first randomly sampling a subset of m classes from C_base, and then randomly sampling a small "training" set S_train (also called the support set) and a small "test" set S_test (also called the query set). The learner, i.e., the classifier h^cls_θh, outputs estimated conditional probabilities p for each example (x, y) in S_test based on S_train. That is, p(x) = h^cls_θh(x, S_train). The meta-learner back-propagates the gradient of the total classification loss ℓ^cls = Σ_{(x,y)∈S_test} loss(h^cls_θh(x, S_train), y) over S_test to update the learner parameters θ_h. During the meta-testing stage, the resulting h^cls_θh is used to address the few-shot classification task on D_novel, which predicts class probabilities of unlabeled test examples conditioned on the given small labeled training set S_train of C_novel. Our meta-learning with hallucination framework introduces an additional "hallucinator" module G_θG with parameters θ_G to augment the small training set S_train. To facilitate training, we follow recent work (Hariharan & Girshick, 2017; Wang et al., 2018) and first pre-train a deep convolutional network on D_base using a standard cross-entropy loss. We use it to extract the feature representation x ∈ X for an input image I. 
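To make the episodic objective above concrete, the following is a minimal PyTorch-style sketch of the loss for one meta-training episode, instantiated with a nearest-centroid learner of the kind the paper uses later; this is our own illustration under stated assumptions (the function name and interface are hypothetical), not the authors' released code:

```python
import torch
import torch.nn.functional as F

def episode_loss(h, x_train, y_train, x_test, y_test, m):
    # h: the learner, mapping pre-trained features to the embedding space Phi.
    z_train, z_test = h(x_train), h(x_test)
    # One prototype per class: the mean support embedding (prototypical networks).
    protos = torch.stack([z_train[y_train == c].mean(dim=0) for c in range(m)])
    # Class logits: negative squared Euclidean distance to each prototype.
    logits = -torch.cdist(z_test, protos).pow(2)
    # l_cls: cross-entropy summed over the query set S_test.
    return F.cross_entropy(logits, y_test, reduction="sum")
```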
Meta-learning is then performed over the pre-trained features {x_i}. As shown in the shaded region in Figure 2, given an initial S_train, the hallucinator G_θG generates additional examples for each class. Our framework applies to various types of hallucinators, and here we consider the powerful GAN-like hallucinator of Wang et al. (2018). Each hallucinated example is of the form (G_θG(x, z), y), where (x, y) is a sampled seed example from S_train, and z is a sampled noise vector. The set of generated examples S^G_train is added to S_train to create an augmented training set S^aug_train. In the next section, we show how to meta-train G_θG on C_base, so that it can hallucinate new examples to augment S_train of C_novel during meta-testing." }, { "heading": "4 PRECISE COLLABORATIVE HALLUCINATOR", "text": "We now present our PrecisE Collaborative hAlluciNator (PECAN) shown in Figure 2, which exploits two important criteria for useful hallucination: precision and collaboration. As important constraints and guidance, these criteria facilitate hallucination to improve the classification performance.
Basic hallucinator with hard precision. First, a precise hallucinator indicates that a classifier trained on S^aug_train should produce superior validation accuracy. We achieve this by training the hallucinator end-to-end with the learner (Wang et al., 2018). As shown in the shaded region in Figure 2, during each episode of meta-training, the learner module h^cls_θh uses S^aug_train to produce conditional probabilities h^cls_θh(x, S^aug_train) for each example (x, y) in the test set S_test. The meta-learner then back-propagates the gradient of the total classification loss ℓ^cls = Σ_{(x,y)∈S_test} loss(h^cls_θh(x, S^aug_train), y) to update both the learner parameters θ_h and the hallucinator parameters θ_G.
Soft precision-inducing hallucinator. One of the important characteristics of an optimal generative model is that the generated examples should be indistinguishable from real ones (Goodfellow et al., 2014). We argue that, for our recognition-task-oriented hallucinator, this means that the classifier trained on hallucinated examples needs to be similar to the classifier trained on real examples. As shown in Figure 2, given an initial relatively large training set S*_train, which contains n* examples for each of the m classes, we randomly sample n (n ≪ n*) examples per class, and obtain a subset S_train. From S_train, the hallucinator G_θG generates n* examples per class as S^G_train. This produces two training sets: S*_train with real examples and S^G_train with hallucinated examples, where both contain the same number of examples. Importantly, note that S^G_train is hallucinated from the subset S_train instead of the initial large set S*_train, and because n ≪ n*, we rule out the trivial identity hallucinator or memorization. We train two additional classification networks: h^real based on S*_train and h^G based on S^G_train, both of which have the same architecture as h^cls. When evaluated on the same test set S_test composed of real examples, a comparable performance between h^real and h^G shows that the hallucinated samples are sufficiently precise, and as diverse as the real training set. 
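A minimal sketch of this construction for a single class (a hypothetical illustration: we assume the hallucinator G consumes the concatenation of a seed feature and a noise vector, and all names here are our own):

```python
import torch

def build_precision_sets(G, feats_star, n, noise_dim):
    """feats_star: [n_star, d] real features of one class, i.e., S*_train.
    Returns S_train (the small real subset) and S^G_train (hallucinated)."""
    n_star = feats_star.size(0)
    s_train = feats_star[torch.randperm(n_star)[:n]]   # S_train, with n << n_star
    seeds = s_train[torch.randint(n, (n_star,))]       # seed examples from S_train
    z = torch.randn(n_star, noise_dim)                 # one noise vector per sample
    s_g_train = G(torch.cat([seeds, z], dim=1))        # n_star hallucinated features
    return s_train, s_g_train
```

Training h^real on feats_star and h^G on the returned hallucinated set, and then comparing their accuracy on the same real test set, implements the comparison described above.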
Conversely, when the hallucinator is imperfect, the accuracy of h^G will be lower than that of h^real.
This similarity of classification accuracy essentially measures the difference between the learned (i.e., hallucinated) and the target (i.e., real) distributions, which could serve as an additional supervisory signal for training a better hallucinator. Since quantifying the similarity of accuracy directly would be difficult (Hinton et al., 2015), we instead introduce a loss function that acts on the network predictions. For an example (x, y) in S_test, the two networks produce conditional probabilities
p^real(x) = h^real_θh(x, S*_train) and p^G(x) = h^G_θh(x, S^G_train), (1)
respectively. While only the largest entry in p^real(x) or p^G(x) is used to make predictions associated with the ground-truth label y, other entries still carry rich information about the recognition task and the network, as observed in (Hinton et al., 2015; Dvornik et al., 2019). We thus leverage the probabilities p̂^real and p̂^G in the absence of the ground-truth label and measure their similarity using the negative cosine distance:
ψ(p̂^real, p̂^G) = −cos(p̂^real, p̂^G), (2)
where p̂^real and p̂^G are obtained by removing the logit for y in p^real and p^G, and re-normalizing the remaining logits using softmax with a learnable temperature. We treat the classification networks h^cls, h^real, and h^G as the new learner h and use shared parameters for them. Their difference thus lies in different conditional training sets. We obtain the soft precision-inducing loss ℓ^pre by summing the loss (2) over S_test, and then combine it with the hard precision (i.e., the classification) loss as the classification objective. p^G is now encouraged to not only make the right prediction according to the ground-truth label, but also make similar second-best, third-best, etc., choice predictions as p^real.
Collaboration between hallucinator and learner. We now consider the interaction between the hallucinator G and the learner h. While hallucination is conducted in the pre-trained feature space X, the final classification is performed in a new embedding space Φ learned by the learner. Since the classification objective is directly imposed on the learner h, the hallucinator G continues to rely on the back-propagated signal to update its parameters. We may end up with a good embedding space Φ but a poor hallucinator G in the original space X. This undesired effect implies a potential imbalance between the hallucinator and the learner: a stronger learner that is able to make allowances for errors in hallucination, but a "lazy" hallucinator that does not make its best effort to capture the data distributions. Indeed, as is empirically validated in the experimental section (see Figure 3), despite being able to match the class distributions in the embedding space Φ, the hallucinated examples are initially pulled away from the class distributions in the feature space X. To mitigate this issue, we introduce a simple collaborative objective for the hallucinator, which provides an additional constraint or regularization on the hallucination process. This collaborative objective is the same as the above classification objective (i.e., a combination of the classification loss and the precision-inducing loss), but enforces direct and early supervision for the hallucinator in the pre-trained feature space X. 
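For reference, the similarity in Eq. (2) above is straightforward to transcribe; a hypothetical sketch (the tensor layout and the handling of the learnable temperature are our own assumptions):

```python
import torch
import torch.nn.functional as F

def soft_precision(logits_real, logits_g, y, temperature):
    # Drop the logit of the ground-truth class y, then re-normalize with a
    # learnable softmax temperature to obtain p_hat^real and p_hat^G.
    mask = torch.ones_like(logits_real, dtype=torch.bool)
    mask[torch.arange(len(y)), y] = False
    m = logits_real.size(1) - 1
    p_real = F.softmax(logits_real[mask].view(-1, m) / temperature, dim=1)
    p_g = F.softmax(logits_g[mask].view(-1, m) / temperature, dim=1)
    # psi = -cos(p_hat^real, p_hat^G), summed over the test set S_test.
    return -F.cosine_similarity(p_real, p_g, dim=1).sum()
```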
This early supervision directly influences the update process of the hallucinator parameters, generating much more discriminative examples right after hallucination than would be the case if we had to rely on gradual back-propagation from the learner alone. Our objective thus strengthens the cooperation between the hallucinator and the learner for the final classification performance, and can be viewed as a source of deep supervision that introduces auxiliary losses to intermediate layers when training deep neural networks (Simonyan & Zisserman, 2015; Lee et al., 2015). The overall objective combines the classification objective L_learner (on the learner) and the collaborative objective L_hal (on the hallucinator), each of which consists of a classification loss ℓ^cls (hard precision, as cross-entropy with respect to ground truth) and a soft precision-inducing loss ℓ^pre:
L(θ_G, θ_h) = L_learner + λ L_hal = ℓ^cls_learner + λ_1 ℓ^pre_learner + λ_2 ℓ^cls_hal + λ_3 ℓ^pre_hal, (3)
where λ, λ_1, λ_2, and λ_3 are scalar hyper-parameters.
Our hallucinator is general and applies to different types of h (i.e., meta-learning algorithms). Here we focus on the widely used and powerful prototypical networks (PN) (Snell et al., 2017), prototype matching networks (PMN) (Vinyals et al., 2016; Wang et al., 2018), and cosine classifiers (Cos) (Gidaris & Komodakis, 2018; Chen et al., 2019a). Without loss of generality, we take PN as an example to explain the overall meta-training and meta-testing process. PN learns an embedding space Φ and uses a non-parametric nearest centroid classifier to assign class probabilities for a test example based on its distances from class means in Φ. As before, in each meta-training episode, after sampling S*_train, S_train, and S_test and hallucinating S^G_train in the pre-trained feature space X, we perform nearest centroid classification and produce the collaborative objective L_hal on S_test. We then feed the examples to the PN learner, obtain their embedded features in Φ, perform nearest centroid classification, and produce the classification objective L_learner on S_test. The final loss is back-propagated to update both the PN learner parameters θ_h and the hallucinator parameters θ_G. Figure 2 shows a schematic of the entire process. During meta-testing, we use the resulting G_θG to hallucinate new examples to augment S_train of C_novel, and we combine the predicted class probabilities in X and Φ as the final predictions." }, { "heading": "5 EXPERIMENTAL EVALUATION", "text": "We explore the use of our meta-learning with hallucination framework for few-shot visual classification tasks. We focus the evaluation on the ImageNet based few-shot benchmark (Hariharan & Girshick, 2017; Wang et al., 2018). This is one of the largest datasets used for few-shot classification by far, and it captures more realistic scenarios than others based on handwritten characters (Lake et al., 2015) or low-resolution images (Vinyals et al., 2016). The benchmark divides the 1,000 ImageNet categories (Russakovsky et al., 2015) into 389 base classes C_base, with thousands of training images per class, and 611 novel classes C_novel, with a small number n of training images per class. Following Hariharan & Girshick (2017), we use C_base to train a convolutional network (ConvNet) based feature extractor and to conduct meta-training. Meta-testing is performed on C_novel, and the performance is evaluated on a held-out test set, i.e., the original validation set of ImageNet. 
In addition, to avoid over-fitting, both C_base and C_novel are further split into two disjoint subsets. 193 of the base classes (C^cv_base) and 300 of the novel classes (C^cv_novel) are used for cross-validating hyper-parameters, and the remaining 196 base classes (C^fin_base) and 311 novel classes (C^fin_novel) are used for the final evaluation. Here we focus on hallucinating novel instances and thus evaluate the performance primarily on the novel classes C^fin_novel, which is also consistent with most of the contemporary work (Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017). We report the mean top-1 and top-5 accuracies for 311-way, n = 1, 2, 5, 10, 20-shot classification, with each of them averaged over 5 trials.
In addition to this challenging version of ImageNet, we also evaluate on the widely used miniImageNet (Vinyals et al., 2016) dataset to show the generality of our approach. miniImageNet is a subset of 100 classes selected randomly from ImageNet with 600 images sampled from each class. Following the data split in Ravi & Larochelle (2017), we use 64 base, 16 validation, and 20 novel classes. We evaluate in the standard 5-way, 1-shot and 5-way, 5-shot settings (Vinyals et al., 2016)." }, { "heading": "5.1 RESULTS ON IMAGENET", "text": "Implementation details. We mainly use a ResNet-10 architecture (He et al., 2016) as the feature extractor, following Hariharan & Girshick (2017); Wang et al. (2018). Additionally, we provide results using a deeper ResNet-50 architecture in Section A.3. We extract and record the features, and perform meta-learning by using these pre-computed features. We consider three widely-used, powerful meta-learning approaches: prototypical networks (PN) (Snell et al., 2017), prototype matching networks (PMN) (Vinyals et al., 2016; Wang et al., 2018), and cosine classifiers (Cos-Cls) used in Gidaris & Komodakis (2018); Chen et al. (2019a). More implementation details are included in Section A.1.
Baselines. First we compare with the state-of-the-art meta-learning with hallucination method (Wang et al., 2018), which is a special case of our approach learned with only the hard precision loss. While Wang et al. (2018) focused on 'PN w/ G' and 'PMN w/ G', here we consider an additional type of classifier with hallucination, 'Cos-Cls w/ G', to show the generality of our work. In addition, we compare with a variety of baselines, including (1) these meta-learning approaches with standard data augmentation techniques (Chen et al., 2019a); (2) data hallucination approaches which are not meta-learned: logistic regression with analogies hallucination (Hariharan & Girshick, 2017) and a Gaussian hallucinator (Wang et al., 2018); (3) other recent meta-learning approaches: matching networks (MN) (Vinyals et al., 2016), model-agnostic meta-learning (MAML) (Finn et al., 2017), and 'cosine classifier & attentive weight generators (Cos & Att)' (Gidaris & Komodakis, 2018); (4) classical few-shot learning approaches: Siamese networks (SN) (Koch et al., 2015); and (5) simple baselines which are not meta-learned: logistic regression (Hariharan & Girshick, 2017). For fair comparison, all these baselines and our approach use the same pre-trained ConvNet backbone.
Comparisons with the state of the art. Table 1 shows that our PECAN consistently outperforms all the baselines by large margins across different scenarios. For this challenging 311-way classification, our improvements are of the order of 1% to 2%, while standard deviations for accuracy are of the order of 0.2%. 
For example, in the case of top-5 accuracy, our 'PN w/ G + PECAN' outperforms 'PN w/ G' by 1.5 points for n = 1 and 1.6 points for n = 20. Similar trends can be observed for 'PMN w/ G + PECAN' and 'Cos-Cls w/ G + PECAN', and also in the top-1 accuracy regime. This indicates that our approach is general and can work with different meta-learners.
Ablation studies. We conduct a series of ablations to evaluate the contribution of each component and different design choices. We use the prototypical network (PN) here due to its fast training speed.
Variants of PECAN. PECAN leverages two requirements for the meta-learned hallucinator: precision and collaboration. 'ℓ^cls_learner' is the basic hallucinator with only the hard precision based on the classification loss. Table 2 shows that each requirement by itself yields performance superior to the basic hallucinator. The soft precision-inducing loss ℓ^pre consistently helps when combined with the hard precision ℓ^cls: 'ℓ^cls_learner + ℓ^pre_learner' outperforms 'ℓ^cls_learner' and 'ℓ^cls_hal + ℓ^pre_hal' outperforms 'ℓ^cls_hal'. The collaboration objective integrates ℓ_learner and ℓ_hal to boost the performance: 'ℓ^cls_learner + ℓ^cls_hal' outperforms 'ℓ^cls_learner'. Each component is thus essential and complementary to the others, enabling our full PECAN to outperform its variants.
Choice of similarity measure in soft precision-inducing loss. Our precision-inducing loss measures the similarity between classifier predictions p^real and p^G. We used the negative cosine distance between the probabilities p̂^real and p̂^G in the absence of ground-truth labels. Table 3 compares with other types of similarity: a variant of the negative cosine distance, cross-entropy as in knowledge distillation (Hinton et al., 2015), Jensen-Shannon divergence, and symmetric KL-divergence (Dvornik et al., 2019). Our similarity achieves the best performance, and removing the true-class probability consistently helps.
Impact of collaborative objective. Our collaborative objective introduces additional direct and early supervision to train the hallucinator. Table 2 shows quantitatively its contribution to the overall accuracy. Here, we further qualitatively understand its impact through t-SNE visualizations (van der Maaten & Hinton, 2008) of the hallucinated examples for novel classes. For ease of analysis, we do not use the precision-inducing loss. Without the collaborative objective, despite being able to match the class distributions in the embedding space Φ (Figure 3b), the hallucinated examples are initially pulled away from the class distributions in the pre-trained feature space X (Figure 3a), indicating a "lazy" hallucinator. In contrast, the collaborative objective enforces the hallucinator to generate more discriminative examples right after hallucination (Figure 3c), leading to improved performance.
Qualitative visualizations. To better understand the hallucination process, Figure 4 shows some examples of classification results for our PECAN and the state-of-the-art baseline (Wang et al., 2018).
5.2 RESULTS ON miniIMAGENET
To show the generality of our approach, we further evaluate on miniImageNet. We use a ResNet-10 architecture and focus on incorporating our hallucinator into the metric-learning-based meta-learning approach, prototype matching networks (PMN) (Wang et al., 2018), and the optimization-based meta-learning approach, MAML (Finn et al., 2017). For MAML, in each meta-training episode, we sample a batch of few-shot classification tasks. 
For each of the tasks, we sample training sets S*_train and S_train, sample a test set S_test, hallucinate S^G_train, and obtain an augmented training set S^aug_train. In the MAML inner loop, for each task, conditioning on its S^aug_train (S^G_train, or S*_train), we adapt the parameters of h^cls (h^G, or h^real) using a few gradient updates. For each task, the adapted h^cls (h^G, or h^real) is evaluated on the corresponding S_test. In the MAML outer loop, we average the classification objective on S_test across the batch of tasks. In a similar way, we compute the collaborative objective in the pre-trained feature space. The final loss is used to update the initial MAML model and the hallucinator. From Table 4, our PECAN significantly outperforms all these state-of-the-art competitors, including other hallucination based approaches such as MetaGAN (Zhang et al., 2018a), delta-encoder (Schwartz et al., 2018), IDeMe-Net (Chen et al., 2019b), and SalNet (Zhang et al., 2019). Our superior performance over MetaGAN, a GAN-based approach to hallucinate data, shows that directly matching the classification performance is more desirable than matching the data distribution between hallucinated and real examples for recognition tasks. Our generic framework can be combined with more recent meta-learning methods, such as LEO (Rusu et al., 2019) and MetaOptNet-SVM (Lee et al., 2019), for further improvement." }, { "heading": "6 CONCLUSION", "text": "We have presented an approach to few-shot classification that uses a precise collaborative hallucinator to generate additional examples. Our hallucinator integrates two important requirements that facilitate data hallucination in a way that most improves the classification performance, and is trained end-to-end through meta-learning. The extensive experiments demonstrate our state-of-the-art performance on the challenging ImageNet and miniImageNet based few-shot benchmarks in various scenarios." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ADDITIONAL IMPLEMENTATION DETAILS", "text": "The embedding architecture of PN is composed of two-layer MLPs with leaky ReLU nonlinearity of slope 0.01, and a Euclidean distance, similar to Snell et al. (2017). The embedding architecture of PMN is composed of a one-layer bi-directional LSTM and attention LSTM, and a cosine distance, as in (Vinyals et al., 2016; Wang et al., 2018). The embedding architecture of Cos-Cls is composed of two-layer MLPs with leaky ReLU nonlinearity of slope 0.01 and an additional one-layer MLP without nonlinearity, and a cosine distance with a learnable temperature, similar to Gidaris & Komodakis (2018). The initial value of the temperature is 100. The classification weight vector is estimated by averaging the feature vectors of the training examples for each class. The hallucinator G is a three-layer MLP with leaky ReLU nonlinearity of slope 0.01, with its parameters initialized to block diagonal identity matrices (Wang et al., 2018). The dimensionality of the hidden layers is 512 for ResNet-10 features and 2,048 for ResNet-50 features.
Both the baselines (PN/PMN/Cos-Cls and PN/PMN/Cos-Cls with hallucination) and our approach are meta-trained on ImageNet for 60,000 episodes by SGD with an initial learning rate of 0.05 for PN/PMN and 0.005 for Cos-Cls, decayed by a factor of 10 every 20,000 episodes. In each episode of the meta-training stage, we sample all the base classes following Wang et al. (2018), who found that it is advantageous to use more classes rather than fewer. 
We sample n* = 20 examples per class from the base dataset D_base, leading to an initial training set S*_train. Consistent with Wang et al. (2018), to make a single hallucinator robust to different sample sizes, we randomly sample different numbers of examples per class from S*_train, from 1 to 15 examples per class, to obtain S_train. One random seed example is sampled from S_train and fed into the hallucinator G with different noise vectors to generate 20 examples per class as S^G_train. Hallucinated examples are sampled from S^G_train and added to S_train until there are exactly 20 examples per class in S^aug_train. For the test set S_test, we have 5 random examples per class for prototypical networks (PN) (Snell et al., 2017) and cosine classifiers (Cos-Cls) (Gidaris & Komodakis, 2018), and 1 random example per class for prototype matching networks (PMN) (Vinyals et al., 2016; Wang et al., 2018).
For the soft precision-inducing loss, we use softmax with temperature to produce the conditional class probabilities p̂^real and p̂^G. This temperature is a learnable parameter. It is shared between p̂^real and p̂^G, but is not shared between L_learner and L_hal.
The hyper-parameters obtained by cross-validation on ImageNet are: λ_1 = 0.5, λ_2 = 0.02, and λ_3 = 0.1 for PN with ResNet-10; λ_1 = 0.2, λ_2 = 0.0001, and λ_3 = 0.05 for PN with ResNet-50; λ_1 = 0.4, λ_2 = 2.0, and λ_3 = 0.1 for PMN with ResNet-10; and λ_1 = 0.5, λ_2 = 0.000001, and λ_3 = 0.005 for the cosine classifier with ResNet-10.
During the meta-testing stage, following Wang et al. (2018) we use n = 1, 2, 5, 10 or 20 examples per class from the novel dataset D_novel, and then hallucinate a fixed number of additional examples for each novel class. By cross-validation, the number of hallucinated examples per class is set to 10 for PN, Cos-Cls and Cos-Cls w/ G + PECAN with ResNet-10, 8 for Cos-Cls w/ G with ResNet-10, 5 for PN with ResNet-50, and 20 for PMN with ResNet-10. We combine the classifier prediction results in the pre-trained feature space and the learned embedding space using a scalar hyper-parameter. By cross-validation, this hyper-parameter is set to 0.05 for PN with ResNet-10, 0.07 for PN with ResNet-50, 1 for PMN with ResNet-10, and 0.00002 for the cosine classifier with ResNet-10. For the test set S_test, following Wang et al. (2018) we have 50 real examples per class, and we average the top-1 or top-5 accuracy over the novel classes." }, { "heading": "A.2 PERFORMANCE ON BASE CLASSES", "text": "While we significantly improve the classification performance on novel classes, our approach remains accurate on base classes. For example, for prototypical networks (PN) (Snell et al., 2017), both our 'PN w/ G + PECAN' and the baseline 'PN w/ G' achieve the same top-5 accuracy of 92.4% on ImageNet.
A.3 IMPACT OF DEEPER REPRESENTATION MODELS
Figure A.1 shows the results on ImageNet using features from a ResNet-50 architecture. As expected, deeper networks result in better performance for all the approaches, but our PECAN hallucination strategy still provides large gains across the board over the state-of-the-art meta-learned hallucinator in (Wang et al., 2018)." }, { "heading": "A.4 ANALYSIS OF HYPER-PARAMETER SENSITIVITY", "text": "Hyper-parameters in the overall objective. We conduct sensitivity experiments for the hyper-parameters λ_1, λ_2, and λ_3, which trade off different loss components in the overall objective of our PECAN. We vary one of the three hyper-parameters while fixing the remaining two to their cross-validated values. 
Figures A.2a, A.2b, and A.2c show the top-5 accuracy of 'PN w/ G + PECAN' on the novel classes for the ImageNet based n-shot classification benchmark. We can see that the top-5 accuracy is stable over a wide range of hyper-parameter values, for example when the value of λ_1 becomes 50 times larger or 100 times smaller than the λ_1 used in the main paper. Across the board, our PECAN consistently and significantly outperforms the baselines shown in the main paper.
Number of hallucinated examples. We also show how the top-5 accuracy changes for n = 1-shot classification with respect to the number of hallucinated images in Figure A.2d. We can see that when the number of hallucinated examples is changed from 0 to 10, the performance of our PECAN gradually improves, and then saturates and drops slightly with more than 10 images generated." }, { "heading": "A.5 ADDITIONAL ANALYSIS OF SOFT PRECISION-INDUCING LOSS", "text": "Our soft precision-inducing loss measures the similarity between classifier predictions p^real and p^G. This is a general similarity measure which applies to various types of classifiers, including parametric and non-parametric classifiers. For prototypical networks (PN) (Snell et al., 2017), a non-parametric nearest centroid classifier is used to assign class probabilities for a test example based on its distances from class means. Hence, in this special case, to measure the similarity between classifier predictions, we can directly calculate the distance between the mean of real examples m^real and the mean of hallucinated examples m^G. Table A.1 compares our generic similarity with this specific similarity for PN on the ImageNet based n-shot classification benchmark. The result shows that our similarity generalizes significantly better for novel classes." }, { "heading": "A.6 ADDITIONAL VISUALIZATIONS OF CLASSIFICATION RESULTS", "text": "Similar to Figure 5 in the main paper, here we provide more examples of classification results for our PECAN and the state-of-the-art meta-learned hallucinator (Wang et al., 2018) in Figure A.3." }, { "heading": "A.7 EXTENSION TO FEW-SHOT REGRESSION", "text": "In the main paper, we focus on few-shot classification tasks. However, our approach is general and applies to few-shot regression tasks as well. When extending our approach to address regression tasks, we need to slightly modify the design of the hallucinator G. Specifically, in classification tasks, the hallucinator G takes as input a seed example (x, y) and outputs a hallucinated example (x′, y′) = (G(x, z), y), where the class label y′ = y, indicating that the hallucinated example belongs to the same class as the seed example. In contrast, in regression tasks, y becomes a continuous quantity. We thus cannot enforce the constraint y′ = y. To address this issue, we modify the hallucinator G to directly generate the tuple (x′, y′) = G(x, y, z). This is achieved by concatenating x and y (together with the noise vector z) as input to G.
We incorporate our hallucinator into MAML (Finn et al., 2017) and evaluate on the n = 5-shot sinusoidal regression task proposed in Finn et al. (2017). Each task regresses from the input to the output of a sine curve, and different tasks differ in the amplitude and phase of the sinusoid. The amplitude and the phase are uniformly distributed within [0.1, 5.0] and [0, π], respectively. During meta-training and meta-testing, the input datapoints x of each task are uniformly sampled from [−5.0, 5.0]. 
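A minimal sketch of this modified hallucinator interface (the noise dimension and layer sizes are illustrative assumptions on our part, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class RegressionHallucinator(nn.Module):
    """G(x, y, z): consumes the seed pair and noise, emits a new (x', y') tuple."""
    def __init__(self, noise_dim=8, hidden=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + 1 + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # outputs the hallucinated pair (x', y')
        )

    def forward(self, x, y, z):
        # x, y: [B, 1] seed inputs/targets; z: [B, noise_dim] noise vectors.
        out = self.net(torch.cat([x, y, z], dim=1))
        return out[:, :1], out[:, 1:]  # x', y'
```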
The prediction loss is measured by the mean squared error (MSE) between the prediction and the true value. The regressor is a feedforward neural network with 2 hidden layers of size 40 with ReLU nonlinearity. We use MAML (Finn et al., 2017) with one gradient update based on n = 5 examples with a fixed step size α = 0.01, and use Adam as the meta-optimizer (Kingma & Welling, 2014).
We compare with two baselines: (1) standard MAML, and (2) 'MAML w/ G', which is MAML with the hallucinator in (Wang et al., 2018). While Wang et al. (2018) focus on classification tasks, here we extend the use of their hallucinator in a similar way as discussed above. Table A.2 shows that our PECAN consistently outperforms the baselines for the regression task as well, indicating the generality of our approach.
A.8 VISUALIZATIONS OF HALLUCINATED EXAMPLES
Our hallucination is performed in the pre-trained feature space, and visualizing the hallucinated features directly is not intuitive. In addition to the t-SNE visualization of hallucinated samples in Figure 3, in Figure A.4 we include an additional visualization of hallucinated examples in the pixel space, showing, for each hallucinated feature sample, its nearest-neighbor real image in the feature space." } ]
2019
null
SP:28a2ee0012e23223b2c3501a94a5e72e0c718c66
[ "The authors propose to use dynamic convolutional kernels as a means to reduce the computation cost in static CNNs while maintaining their performance. The dynamic kernels are obtained by a linear combination of static kernels where the weights of the linear combination are input-dependent (they are obtained similarly to the coefficients in squeeze-and-excite). The authors also include a theoretical and experimental study of the correlation.", "This paper proposed dynamic convolution (DyNet) to accelerating convolution networks. The new method is tested on the ImageNet dataset with three different backbones. It reduces the computation flops by a large margin while keeps similar classification accuracy. The additional segmentation experiment on the Cityscapes dataset also shows the new module can save computation a lot while maintaining similar segmentation accuracy." ]
The convolution operator is the core of convolutional neural networks (CNNs) and accounts for most of the computation cost. To make CNNs more efficient, many methods have been proposed to either design lightweight networks or compress models. Although some efficient network structures have been proposed, such as MobileNet or ShuffleNet, we find that there still exists redundant information between convolution kernels. To address this issue, we propose a novel dynamic convolution method named DyNet in this paper, which can adaptively generate convolution kernels based on image contents. To demonstrate its effectiveness, we apply DyNet to multiple state-of-the-art CNNs. The experiment results show that DyNet can reduce the computation cost remarkably, while keeping the performance nearly unchanged. Specifically, for ShuffleNetV2 (1.0), MobileNetV2 (1.0), ResNet18 and ResNet50, DyNet reduces 37.0%, 54.7%, 67.2% and 71.3% of FLOPs respectively, while the Top-1 accuracy on ImageNet only changes by +1.0%, −0.27%, −0.6% and −0.08%. Meanwhile, DyNet further accelerates the inference speed of MobileNetV2 (1.0), ResNet18 and ResNet50 by 1.87×, 1.32× and 1.48× on the CPU platform respectively. To verify its scalability, we also apply DyNet to a segmentation task; the results show that DyNet can reduce FLOPs by 69.3% while maintaining the Mean IoU.
[]
[ { "authors": [ "Jimmy Ba", "Rich Caruana" ], "title": "Do deep nets really need to be deep? In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Wenlin Chen", "James Wilson", "Stephen Tyree", "Kilian Weinberger", "Yixin Chen" ], "title": "Compressing neural networks with the hashing trick", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "François Chollet" ], "title": "Xception: Deep learning with depthwise separable convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Jun Fu", "Jing Liu", "Haijie Tian", "Zhiwei Fang", "Hanqing Lu" ], "title": "Dual attention network for scene segmentation", "venue": null, "year": 2018 }, { "authors": [ "Jingjing Gong", "Xipeng Qiu", "Xinchi Chen", "Dong Liang", "Xuanjing Huang" ], "title": "Convolutional interaction network for natural language inference", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Kaiming He", "Jian Sun" ], "title": "Convolutional neural networks at constrained time cost", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Forrest N Iandola", "Song Han", "Matthew W Moskewicz", "Khalid Ashraf", "William J Dally", "Kurt Keutzer" ], "title": "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and¡ 0.5 mb model size", "venue": "arXiv preprint arXiv:1602.07360,", "year": 2016 }, { "authors": [ "Max Jaderberg", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Speeding up convolutional neural networks with low rank expansions", "venue": "arXiv preprint arXiv:1405.3866,", "year": 2014 }, { "authors": [ "Xu Jia", "Bert De Brabandere", "Tinne Tuytelaars", "Luc V 
Gool" ], "title": "Dynamic filter networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yangqing Jia", "Evan Shelhamer", "Jeff Donahue", "Sergey Karayev", "Jonathan Long", "Ross Girshick", "Sergio Guadarrama", "Trevor Darrell" ], "title": "Caffe: Convolutional architecture for fast feature embedding", "venue": "arXiv preprint arXiv:1408.5093,", "year": 2014 }, { "authors": [ "Benjamin Klein", "Lior Wolf", "Yehuda Afek" ], "title": "A dynamic convolutional layer for short range weather prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Vadim Lebedev", "Yaroslav Ganin", "Maksim Rakhuba", "Ivan Oseledets", "Victor Lempitsky" ], "title": "Speeding-up convolutional neural networks using fine-tuned cp-decomposition", "venue": "arXiv preprint arXiv:1412.6553,", "year": 2014 }, { "authors": [ "Zechun Liu", "Haoyuan Mu", "Xiangyu Zhang", "Zichao Guo", "Xin Yang", "Tim Kwang-Ting Cheng", "Jian Sun" ], "title": "Metapruning: Meta learning for automatic neural network channel pruning", "venue": null, "year": 1903 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Adriana Romero", "Nicolas Ballas", "Samira Ebrahimi Kahou", "Antoine Chassang", "Carlo Gatta", "Yoshua Bengio" ], "title": "Fitnets: Hints for thin deep nets", "venue": "arXiv preprint arXiv:1412.6550,", "year": 2014 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Dinghan Shen", "Martin Renqiang Min", "Yitong Li", "Lawrence Carin" ], "title": "Learning context-sensitive convolutional filters for text processing", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Ke Sun", "Mingjie Li", "Dong Liu", "Jingdong Wang" ], "title": "Igcv3: Interleaved low-rank group convolutions for efficient deep neural networks", "venue": "arXiv preprint arXiv:1806.00178,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Alexander Toshev", "Dumitru Erhan" ], "title": "Deep neural networks for object detection", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent 
Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Mingxing Tan", "Quoc V. Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks. 2019", "venue": null, "year": 2019 }, { "authors": [ "Wei Wen", "Chunpeng Wu", "Yandan Wang", "Yiran Chen", "Hai Li" ], "title": "Learning structured sparsity in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann N Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": null, "year": 1901 }, { "authors": [ "Guotian Xie", "Jingdong Wang", "Ting Zhang", "Jianhuang Lai", "Richang Hong", "Guo-Jun Qi" ], "title": "Interleaved structured sparse convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Brandon Yang", "Gabriel Bender", "Quoc V Le", "Jiquan Ngiam" ], "title": "Soft conditional computation", "venue": "arXiv preprint arXiv:1904.04971,", "year": 2019 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zhao Zhong", "Junjie Yan", "Wei Wu", "Jing Shao", "Cheng-Lin Liu" ], "title": "Practical block-wise neural network architecture generation", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zhao Zhong", "Zichen Yang", "Boyang Deng", "Junjie Yan", "Wei Wu", "Jing Shao", "Cheng-Lin Liu" ], "title": "Blockqnn: Efficient block-wise neural network architecture generation", "venue": "arXiv preprint arXiv:1808.05584,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Convolutional neural networks (CNNs) have achieved state-of-the-art performance in many computer vision tasks (Krizhevsky et al., 2012; Szegedy et al., 2013), and the neural architectures of CNNs are evolving over the years (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016; Hu et al., 2018; Zhong et al., 2018a;b). However, modern high-performance CNNs often require a lot of computation resources to execute large amount of convolution kernel operations. Aside from the accuracy, to make CNNs applicable on mobile devices, building lightweight and efficient deep models has attracting much more attention recently (Howard et al., 2017; Sandler et al., 2018; Zhang et al., 2018; Ma et al., 2018). These methods can be roughly categorized into two types: efficient network design and model compression. Representative methods for the former category are MobileNet (Howard et al., 2017; Sandler et al., 2018) and ShuffleNet (Ma et al., 2018; Zhang et al., 2018), which use depth-wise separable convolution and channel-level shuffle techniques to reduce computation cost. On the other hand, model compression based methods tend to obtain a smaller network by compressing a larger network via pruning, factorization or mimic (Chen et al., 2015; Han et al., 2015a; Jaderberg et al., 2014; Lebedev et al., 2014; Ba & Caruana, 2014).\nAlthough some handcrafted efficient network structures have been designed, we observe that the significant correlations still exist among convolutional kernels, and introduce large amount of redundant calculations. Moreover, these small networks are hard to compress. For example, Liu et al. (2019) compress MobileNetV2 to 124M, but the accuracy drops by 5.4% on ImageNet. We theoretically analyze above observation, and find that this phenomenon is caused by the nature of static convolution, where correlated kernels are cooperated to extract noise-irrelevant features. Thus it is hard to compress the fixed convolution kernels without information loss. We also find that if we linearly fuse several convolution kernels to generate one dynamic kernel based on the input, we can obtain the noise-irrelevant features without the cooperation of multiple kernels, and further reduce the computation cost of convolution layer remarkably.\nBased on above observations and analysis, in this paper, we propose a novel dynamic convolution method named DyNet. The overall framework of DyNet is shown in Figure 1, which consists of a coefficient prediction module and a dynamic generation module. The coefficient prediction module is trainable and designed to predict the coefficients of fixed convolution kernels. Then the dynamic generation module further generates a dynamic kernel based on the predicted coefficients.\nOur proposed dynamic convolution method is simple to implement, and can be used as a drop-in plugin for any convolution layer to reduce computation cost. We evaluate the proposed DyNet on state-of-the-art networks such as MobileNetV2, ShuffleNetV2 and ResNets. Experiment results show that DyNet reduces 37.0% FLOPs of ShuffleNetV2 (1.0) while further improve the Top-1 accuracy on ImageNet by 1.0%. For MobileNetV2 (1.0), ResNet18 and ResNet50, DyNet reduces 54.7%, 67.2% and 71.3% FLOPs respectively, the Top-1 accuracy on ImageNet changes by −0.27%, −0.6% and −0.08%. Meanwhile, DyNet further accelerates the inference speed of MobileNetV2 (1.0), ResNet18 and ResNet50 by 1.87×,1.32×and 1.48× on CPU platform respectively." 
}, { "heading": "2 RELATED WORK", "text": "We review related works from three aspects: efficient convolution neural network design, model compression and dynamic convolutional kernels." }, { "heading": "2.1 EFFICIENT CONVOLUTION NEURAL NETWORK DESIGN", "text": "In many computer vision tasks (Krizhevsky et al., 2012; Szegedy et al., 2013), model design plays a key role. The increasing demands of high quality networks on mobile/embedding devices have driven the study on efficient network design (He & Sun, 2015). For example, GoogleNet (Szegedy et al., 2015) increases the depth of networks with lower complexity compared to simply stacking convolution layers; SqueezeNet (Iandola et al., 2016) deploys a bottleneck approach to design a very small network; Xception (Chollet, 2017), MobileNet (Howard et al., 2017) and MobileNetV2 (Sandler et al., 2018) use depth-wise separable convolution to reduce computation and model size. ShuffleNet (Zhang et al., 2018) and ShuffleNetV2 (Ma et al., 2018) shuffle channels to reduce computation of 1× 1 convolution kernel and improve accuracy. Despite the progress made by these efforts, we find that there still exists redundancy between convolution kernels and cause redundant computation." }, { "heading": "2.2 MODEL COMPRESSION", "text": "Another trend to obtaining small network is model compression. Factorization based methods (Jaderberg et al., 2014; Lebedev et al., 2014) try to speed up convolution operation by using tensor decomposition to approximate original convolution operation. Knowledge distillation based methods (Ba & Caruana, 2014; Romero et al., 2014; Hinton et al., 2015) learn a small network to mimic a larger teacher network. Pruning based methods (Han et al., 2015a;b; Wen et al., 2016; Liu et al., 2019)\ntry to reduce computation by pruning the redundant connections or convolution channels. Compared with those methods, DyNet is more effective especially when the target network is already efficient enough. For example, in (Liu et al., 2019), they get a smaller model of 124M FLOPs by pruning the MobileNetV2, however it drops the accuracy by 5.4% on ImageNet compared with the model with 291M FLOPs. While in DyNet, we can reduce the FLOPs of MobileNetV2 (1.0) from 298M to 129M with the accuracy drops only 0.27%." }, { "heading": "2.3 DYNAMIC CONVOLUTION KERNEL", "text": "Generating dynamic convolution kernels appears in both computer vision and natural language processing (NLP) tasks.\nIn computer vision domain, Klein et al. (Klein et al., 2015) and Brabandere et al. (Jia et al., 2016) directly generate convolution kernels via a linear layer based on the feature maps of previous layers. Because convolution kernels has a large amount of parameters, the linear layer will be inefficient on the hardware. Our proposed method solves this problem via merely predicting the coefficients for linearly combining static kernels and achieve real speed up for CNN on hardware. The idea of linearly combining static kernels using predicted coefficients has been proposed by Yang et al. (Yang et al., 2019), but they focus on using more parameters to make models more expressive while we focus on reducing redundant calculations in convolution. 
We provide a theoretical analysis and conduct correlation experiments to show that correlations among convolutional kernels can be reduced by dynamically fusing several kernels.

In the NLP domain, some works (Shen et al., 2018; Wu et al., 2019; Gong et al., 2018) incorporate context information to generate input-aware convolution filters, which can change according to input sentences of various lengths. These methods also directly generate convolution kernels via a linear layer. Because CNNs in NLP are smaller and the convolution kernels are one-dimensional, the inefficiency of the linear layer is alleviated there. Moreover, Wu et al. (Wu et al., 2019) further alleviate this issue by utilizing depthwise convolution and sharing weights across layers. These methods are designed to improve the adaptivity and flexibility of language modeling, while our method aims to cut down redundant computation cost." }, { "heading": "3 DYNET: DYNAMIC CONVOLUTION IN CNNS", "text": "In this section, we first describe the motivation of DyNet. Then we explain the proposed dynamic convolution in detail. Finally, we illustrate the DyNet based architectures of our proposed Dy-mobile, Dy-shuffle, Dy-ResNet18 and Dy-ResNet50." }, { "heading": "3.1 MOTIVATION", "text": "As illustrated in previous works (Han et al., 2015a;b; Wen et al., 2016; Liu et al., 2019), convolutional kernels are naturally correlated in deep models. For some well-known networks, we plot the distribution of the Pearson product-moment correlation coefficient between feature maps in Figure 2. Most existing works try to reduce these correlations by compression. However, efficient and small networks like MobileNets are hard to prune even though the correlation is still significant. We believe these correlations are vital for maintaining performance because the kernels cooperate to obtain noise-irrelevant features. Take face recognition as an example, where the pose or the illumination is not supposed to change the classification result: the feature maps should gradually become noise-irrelevant as they go deeper. Based on the theoretical analysis in Appendix A, we find that if we dynamically fuse several kernels, we can get noise-irrelevant features without the cooperation of redundant kernels. In this paper, we propose a dynamic convolution method, which learns coefficients to fuse multiple kernels into a dynamic one based on the image contents. We give a more in-depth analysis of our motivation in Appendix A." }, { "heading": "3.2 DYNAMIC CONVOLUTION", "text": "The goal of dynamic convolution is to learn a group of kernel coefficients that fuse multiple fixed kernels into a dynamic one. We demonstrate the overall framework of dynamic convolution in Figure 1. We first utilize a trainable coefficient prediction module to predict the coefficients. We then propose a dynamic generation module to fuse the fixed kernels into a dynamic one. We illustrate the coefficient prediction module and the dynamic generation module in detail in the rest of this section.

Coefficient prediction module The coefficient prediction module is proposed to predict coefficients based on image contents. As shown in Figure 3, it can be composed of a global average pooling layer and a fully connected layer with sigmoid as the activation function. The global average pooling layer aggregates the input feature maps into a $1 \times 1 \times C_{in}$ vector, which serves as a feature extraction layer.
The fully connected layer then maps this feature into a $1 \times 1 \times C$ vector, whose entries are the coefficients for the fixed convolution kernels of several dynamic convolution layers.

Dynamic generation module A dynamic convolution layer with weight $[C_{out} \times g_t, C_{in}, k, k]$ corresponds to $C_{out} \times g_t$ fixed kernels and $C_{out}$ dynamic kernels; the shape of each kernel is $[C_{in}, k, k]$. Here $g_t$ denotes the group size, a hyperparameter. We denote the fixed kernels as $w_t^i$, the dynamic kernels as $\tilde{w}_t$, and the coefficients as $\eta_t^i$, where $t = 0, \dots, C_{out}$ and $i = 0, \dots, g_t$.

After the coefficients are obtained, we generate the dynamic kernels as follows:

$$\tilde{w}_t = \sum_{i=1}^{g_t} \eta_t^i \cdot w_t^i \qquad (1)$$

Training algorithm For the training of the proposed dynamic convolution, a batch based training scheme is not directly applicable, because the convolution kernel is different for different input images in the same mini-batch. Therefore, we fuse feature maps based on the coefficients, rather than kernels, during training. The two are mathematically equivalent, as shown in Eq. 2:

$$\tilde{O}_t = \tilde{w}_t \otimes x = \sum_{i=1}^{g_t} (\eta_t^i \cdot w_t^i) \otimes x = \sum_{i=1}^{g_t} \eta_t^i \cdot (w_t^i \otimes x) = \sum_{i=1}^{g_t} \eta_t^i \cdot O_t^i, \qquad (2)$$

where $x$ denotes the input, $\tilde{O}_t$ denotes the output of the dynamic kernel $\tilde{w}_t$, and $O_t^i$ denotes the output of the fixed kernel $w_t^i$.
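To make the mechanism concrete, the following PyTorch sketch illustrates one way such a layer could be implemented. This is our own illustration of Eq. (1) and Eq. (2), not the authors' released code; the class name, layer sizes and initialization scale are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Sketch of a dynamic convolution layer: g_t fixed kernels per output
    channel are fused by coefficients predicted from the input (Eq. 1/2)."""

    def __init__(self, c_in, c_out, k=3, gt=6):
        super().__init__()
        self.c_out, self.gt, self.pad = c_out, gt, k // 2
        # C_out * g_t fixed kernels, each of shape [C_in, k, k]
        self.weight = nn.Parameter(0.01 * torch.randn(c_out * gt, c_in, k, k))
        # coefficient prediction: global average pooling + fully connected + sigmoid
        self.fc = nn.Linear(c_in, c_out * gt)

    def forward(self, x):
        b = x.size(0)
        eta = torch.sigmoid(self.fc(x.mean(dim=(2, 3))))      # [B, C_out*g_t]
        if self.training:
            # training: convolve with every fixed kernel, then fuse feature maps (Eq. 2)
            o = F.conv2d(x, self.weight, padding=self.pad)     # [B, C_out*g_t, H, W]
            o = o.view(b, self.c_out, self.gt, *o.shape[2:])
            return (o * eta.view(b, self.c_out, self.gt, 1, 1)).sum(dim=2)
        # inference: fuse kernels into one dynamic kernel per image (Eq. 1), convolve once
        outs = []
        for i in range(b):
            w = (self.weight.view(self.c_out, self.gt, *self.weight.shape[1:])
                 * eta[i].view(self.c_out, self.gt, 1, 1, 1)).sum(dim=1)
            outs.append(F.conv2d(x[i:i + 1], w, padding=self.pad))
        return torch.cat(outs, dim=0)
```

With mini-batch size 1 at inference, which the latency measurements in Section 4.3 also assume, the per-image kernel fusion adds only a small, resolution-independent overhead.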
" }, { "heading": "3.3 DYNAMIC CONVOLUTION NEURAL NETWORKS", "text": "We equip MobileNetV2, ShuffleNetV2 and ResNets with our proposed dynamic convolution, and obtain Dy-mobile, Dy-shuffle, Dy-ResNet18 and Dy-ResNet50 respectively. The building blocks of these four networks are shown in Figure 4. With the above dynamic convolution, each dynamic kernel can extract noise-irrelevant features without the cooperation of other kernels. We can therefore reduce the channels of each layer of these base models and retain the performance. We set the hyper-parameter $g_t$ to 6 for all of them, and we give details of these dynamic CNNs below.

Dy-mobile In our proposed Dy-mobile, we replace the original MobileNetV2 block with our dy-mobile block, which is shown in Figure 4 (a). The input of the coefficient prediction module is the input of the block, and it produces the coefficients for all three dynamic convolution layers. Moreover, we make two further adjustments:

• We do not expand the channels in the middle layer as MobileNetV2 does. If we denote the output channels of the block as $C_{out}$, then the channels of all three convolution layers will be $C_{out}$.

• Since depth-wise convolution is efficient, we set $groups = C_{out}/6$ for the dynamic depthwise convolution. We enlarge $C_{out}$ to make it a multiple of 6 if needed.

After the aforementioned adjustments, the first dynamic convolution layer reduces the FLOPs from $6C^2HW$ to $C^2HW$. The second dynamic convolution layer keeps its FLOPs of $6CHW \cdot 3^2$ unchanged, because we reduce the output channels by 6× while also setting the groups of the convolution 6× smaller. For the third dynamic convolution layer, we reduce the FLOPs from $6C^2HW$ to $C^2HW$ as well. The ratio of FLOPs between the original block and our dy-mobile block is:

$$\frac{6C^2HW + 6CHW \cdot 3^2 + 6C^2HW}{C^2HW + 6CHW \cdot 3^2 + C^2HW} = \frac{6C + 27}{C + 27} = 6 - \frac{135}{C + 27} \qquad (3)$$

Dy-shuffle In the original ShuffleNetV2, a channel split operation splits the feature maps into a right branch and a left branch; the right branch goes through one pointwise convolution, one depthwise convolution and one pointwise convolution sequentially. We replace the conventional convolutions with dynamic convolutions in the right branch, as shown in Figure 4 (b). We feed the input of the right branch into the coefficient prediction module to produce the coefficients. In our dy-shuffle block, we split the channels into the left branch and the right branch with ratio 3 : 1, thereby reducing the computation cost of the two dynamic pointwise convolutions by 75%. Similar to dy-mobile, we adjust the parameter "groups" in the dynamic depthwise convolution to keep its FLOPs unchanged.

Dy-ResNet18/50 In Dy-ResNet18 and Dy-ResNet50, we simply halve the output channels of the dynamic convolution layers of each residual block. Because the input channels of each block are large compared with dy-mobile and dy-shuffle, we use two linear layers, as shown in Figure 4 (c) and Figure 4 (d), to reduce the number of parameters. If the input channels are $C_{in}$, the output channels of the first linear layer will be $C_{in}/4$ for Dy-ResNet18/50." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 IMPLEMENTATION DETAILS", "text": "For the training of the proposed dynamic neural networks, each image is augmented by random cropping and flipping, and the networks are optimized with SGD with cosine learning rate decay. We set the batch size, initial learning rate, weight decay and momentum to 2048, 0.8, 5e-5 and 0.9 respectively. We also use label smoothing with rate 0.1. We evaluate the accuracy on the test images with a center crop.

4.2 EXPERIMENT SETTINGS AND COMPARED METHODS

We evaluate DyNet on ImageNet (Russakovsky et al., 2015), which contains 1.28 million training images and 50K validation images collected from 1000 different classes. We train the proposed networks on the training set and report the top-1 error on the validation set. To demonstrate the effectiveness, we compare the proposed dynamic convolution with state-of-the-art networks under the mobile setting, including MobileNetV1 (Howard et al., 2017), MobileNetV2 (Sandler et al., 2018), ShuffleNet (Zhang et al., 2018), ShuffleNet V2 (Ma et al., 2018), Xception (Chollet, 2017), DenseNet (Huang et al., 2017), IGCV2 (Xie et al., 2018) and IGCV3 (Sun et al., 2018)." }, { "heading": "4.3 EXPERIMENT RESULTS AND ANALYSIS", "text": "Analysis of accuracy and computation cost We present the results in Table 1, where the number in brackets indicates the channel number controller (Sandler et al., 2018). We partition the result table into three parts: (1) the proposed dynamic networks; (2) compared state-of-the-art networks under mobile settings; (3) the original networks corresponding to the implemented dynamic networks.

Table 1 provides several valuable observations: (1) Compared with these well-known models under the mobile setting, the proposed Dy-mobile and Dy-shuffle achieve the best classification error with the lowest computation cost. This demonstrates that the proposed dynamic convolution is a simple yet effective way to reduce computation cost. (2) Compared with the corresponding basic neural structures, the proposed Dy-shuffle (1.0), Dy-mobile (1.0), Dy-ResNet18 and Dy-ResNet50 reduce 37.0%, 54.7%, 67.2% and 71.3% of the computation cost respectively, with little drop in Top-1 accuracy. This shows that even though the proposed networks significantly reduce the convolution computation cost, the generated dynamic kernels can still capture sufficient information from the image contents. The results also indicate that the proposed dynamic convolution is a powerful plugin that can be applied to convolution layers to reduce computation cost while maintaining accuracy.

Furthermore, we conduct detailed experiments on MobileNetV2.
We replace the conventional convolutions with the proposed dynamic ones and obtain Dy-MobileNetV2. The classification accuracy of models with different numbers of channels is shown in Figure 5. It is observed that Dy-MobileNetV2 consistently outperforms MobileNetV2, but the advantage weakens as the number of channels increases.

Figure 6: The correlation distribution of fixed kernels and the generated dynamic kernels (x-axis: correlation value from −0.3 to 1; S, M, W, N denote strong, middle, weak and no correlation respectively). We can observe that compared with conventional fixed kernels, the generated dynamic kernels have smaller correlation values.

Analysis of the dynamic kernel Aside from the quantitative analysis, we also examine the redundancy of the generated dynamic kernels compared with conventional fixed kernels in Figure 6. We calculate the correlation between the feature maps output by the second-to-last stage of the original MobileNetV2 (1.0) and of Dy-MobileNetV2 (1.0). Note that Dy-MobileNetV2 (1.0) is different from Dy-mobile (1.0): Dy-MobileNetV2 (1.0) keeps the channels of each layer the same as the original network, while replacing the conventional convolution with dynamic convolution. As shown in Figure 6, the correlation distribution of dynamic kernels has more values between −0.1 and 0.2 than that of fixed convolution kernels, which indicates that the redundancy between dynamic convolution kernels is much smaller than between fixed convolution kernels.

Analysis of speed on the hardware We also analyze the inference speed of DyNet. We carry out experiments on a CPU platform (Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz) with Caffe (Jia et al., 2014). We set the input size to 224 and report the average inference time over 50 iterations. It is reasonable to set the mini-batch size to 1, which is consistent with most inference scenarios. The results are shown in Table 2. Moreover, the latency of fusing the fixed kernels is independent of the input size; thus we expect a bigger acceleration ratio when the input size of the network becomes larger. We conduct experiments to verify this assumption; the results are shown in Figure 7. We observe that the ratio of reduced latency achieved by DyNet indeed grows as the input size becomes larger. As shown in (Tan & Le, 2019), a larger input size can make networks perform significantly better, so DyNet is even more effective in this scenario.

We also analyze the training speed on a GPU platform. The model is trained with 32 NVIDIA Tesla V100 GPUs and a batch size of 2048. We report the average training time of one iteration in Table 2. The training speed of DyNet is slower, which is reasonable because we fuse feature maps rather than kernels according to Eq. 2 in the training stage.
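The claim that the cost of kernel fusion is independent of the input size can be illustrated with a small timing sketch. The code below is our own illustration (layer sizes and iteration counts are arbitrary assumptions, and absolute numbers will differ across machines); it times the two mathematically equivalent paths of Eq. 2 at growing resolutions.

```python
import time
import torch
import torch.nn.functional as F

def fused_kernel_conv(x, weight, eta, c_out, gt):
    # inference path: fuse g_t fixed kernels into one dynamic kernel
    # (fusion cost is independent of H and W), then convolve once
    w = (weight.view(c_out, gt, *weight.shape[1:]) * eta.view(c_out, gt, 1, 1, 1)).sum(1)
    return F.conv2d(x, w, padding=1)

def fused_feature_conv(x, weight, eta, c_out, gt):
    # training path: convolve with every fixed kernel, then fuse feature maps (Eq. 2)
    o = F.conv2d(x, weight, padding=1).view(1, c_out, gt, *x.shape[2:])
    return (o * eta.view(1, c_out, gt, 1, 1)).sum(2)

c_in, c_out, gt, k = 64, 64, 6, 3
weight = torch.randn(c_out * gt, c_in, k, k)
eta = torch.rand(c_out, gt)
for size in (56, 112, 224):
    x = torch.randn(1, c_in, size, size)
    for fn in (fused_kernel_conv, fused_feature_conv):
        t0 = time.perf_counter()
        for _ in range(10):
            fn(x, weight, eta, c_out, gt)
        print(f"{fn.__name__} @ {size}: {(time.perf_counter() - t0) / 10:.4f}s")
```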
" }, { "heading": "4.4 EXPERIMENTS ON SEGMENTATION", "text": "To verify the scalability of DyNet to other tasks, we conduct experiments on segmentation. Compared to Dilated FCN with ResNet50 as the base network (Fu et al., 2018), Dilated FCN with Dy-ResNet50 reduces 69.3% of the FLOPs while maintaining the MIoU on the Cityscapes validation set. The results are shown in Table 3." }, { "heading": "4.5 ABLATION STUDY", "text": "Comparison between dynamic convolution and static convolution We correspondingly design two networks without dynamic convolution. Specifically, we remove the coefficient prediction module and use fixed convolution kernels for Dy-mobile (1.0) and Dy-shuffle (1.5), keeping the channel numbers the same as in the dynamic convolution neural networks. We denote the baseline networks as Fix-mobile (1.0) and Fix-shuffle (1.5) respectively. The results are shown in Table 4: compared with the baseline networks Fix-mobile (1.0) and Fix-shuffle (1.5), the proposed Dy-mobile (1.0) and Dy-shuffle (1.5) achieve absolute classification improvements of 5.19% and 2.82% respectively. This shows that directly decreasing the channel number to reduce computation cost hurts the classification performance considerably, while the proposed dynamic kernel retains the representation ability as much as possible.

Effectiveness of $g_t$ for the dynamic kernel The group size $g_t$ in Eq. 1 does not change the computation cost of dynamic convolution, but it affects the performance of the network, so we provide an ablative study on $g_t$. We set $g_t$ to 2, 4 and 6 for dy-mobile (1.0); the results are shown in Table 5. The performance of dy-mobile (1.0) becomes better as $g_t$ gets larger. This is reasonable because a larger $g_t$ means more kernels cooperate to obtain each noise-irrelevant feature. When $g_t = 1$, the coefficient prediction module can be regarded as merely learning an attention over channels, which can improve the performance of networks as well (Hu et al., 2018). Therefore we provide an ablative study comparing $g_t = 1$ and $g_t = 6$ on Dy-mobile (1.0) and Dy-ResNet18. The results are shown in Table 6. From the table we can see that setting $g_t = 1$ reduces the Top-1 accuracy on ImageNet for Dy-mobile (1.0) and Dy-ResNet18 by 2.58% and 2.79% respectively. This shows that the improvement of our proposed dynamic networks does not come from the attention mechanism alone." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose DyNet, a method to adaptively generate convolution kernels based on image contents, which reduces the redundant computation cost that exists in conventional fixed convolution kernels. Based on the proposed DyNet, we design several dynamic convolution neural networks on top of well-known architectures, i.e., Dy-mobile, Dy-shuffle, Dy-ResNet18 and Dy-ResNet50. The experiment results show that DyNet reduces their FLOPs by 37.0%, 54.7%, 67.2% and 71.3% respectively while keeping the performance essentially unchanged. As future work, we want to further explore the redundancy phenomenon that exists in convolution kernels, and find other ways to reduce computation cost, such as dynamically aggregating different kernels for different images instead of the fixed groups used in this paper." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILED ANALYSIS OF OUR MOTIVATION", "text": "We illustrate our motivation with a convolution with output $f(x)$, i.e.,

$$f(x) = x \otimes w, \qquad (4)$$

where $\otimes$ denotes the convolution operator, $x \in \mathbb{R}^n$ is a vectorized input and $w \in \mathbb{R}^n$ is the filter. Specifically, the $i$th element of the convolution output $f(x)$ is calculated as:

$$f_i(x) = \langle x_{(i)}, w \rangle, \qquad (5)$$

where $\langle \cdot, \cdot \rangle$ denotes the inner product and $x_{(i)}$ is the circular shift of $x$ by $i$ elements. We index from $i = 0$. We denote the noise in $x_{(i)}$ as $\sum_{j=0}^{d-1} \alpha_j y_j$, where $\alpha_j \in \mathbb{R}$ and $\{y_0, y_1, \dots, y_{d-1}\}$ are the basis vectors of the noise space $\Psi$. The kernels in one convolutional layer can be represented as $\{w_0, w_1, \dots, w_c\}$, and the space spanned by $\{w_0, w_1, \dots, w_c\}$ is $\Omega$.
We can prove that if the kernels are trained until $\Psi \subset \Omega$, then for each $w_k \notin \Psi$ we can obtain the noise-irrelevant $f_i(x^{white}) = \langle x^{white}_{(i)}, w_k \rangle$ through the cooperation of the other kernels $w_0, w_1, \dots$. Firstly, $x_{(i)}$ can be decomposed as:

$$x_{(i)} = \bar{x}_{(i)} + \beta w_k + \sum_{j=0}^{d-1} \alpha_j y_j, \qquad (6)$$

where $\beta \in \mathbb{R}$ and $\bar{x} \in \mathbb{R}^n$ is orthogonal to $w_k$ and the $y_j$. For concision we assume the norms of $w_k$ and $y_j$ are 1. Then,

$$f_i(x) = \langle x_{(i)}, w_k \rangle = \Big\langle \bar{x}_{(i)} + \beta w_k + \sum_{j=0}^{d-1} \alpha_j y_j,\, w_k \Big\rangle = \beta \langle w_k, w_k \rangle + \sum_{j=0}^{d-1} \alpha_j \langle y_j, w_k \rangle \qquad (7)$$

When there is no noise, i.e. $\alpha_j = 0$ for $j = 0, 1, \dots, d-1$, the white output $f_i(x^{white})$ becomes:

$$f_i(x^{white}) = \langle x^{white}_{(i)}, w_k \rangle = \langle \bar{x}_{(i)} + \beta w_k, w_k \rangle = \beta \langle w_k, w_k \rangle = \beta. \qquad (8)$$

It is proved in Appendix A.2 that:

$$f_i(x^{white}) = \Big\langle a_{00} w_k + \sum_t \beta_t w_t,\, x_{(i)} \Big\rangle = (a_{00} + \beta_k) \langle w_k, x_{(i)} \rangle + \sum_{t \neq k} \beta_t \langle w_t, x_{(i)} \rangle, \qquad (9)$$

where $\beta_0, \dots, \beta_c$ are determined by the input image. Eq. 9 can be realized by linearly combining the convolution outputs $\langle w_k, x_{(i)} \rangle$ and $\langle w_t, x_{(i)} \rangle$ for those $\beta_t \neq 0$ in the following layers. Thus if $N$ coefficients in Eq. 9 are non-zero, we need to carry out $N$ convolution operations to get the noise-irrelevant output of kernel $w_k$, which causes redundant calculation. From Eq. 9 we can observe that the computation cost can be reduced to a single convolution operation by linearly fusing those kernels into a dynamic one:

$$\tilde{w} = (a_{00} + \beta_k) w_k + \sum_{t \neq k,\, \beta_t \neq 0} \beta_t w_t, \qquad f_i(x^{white}) = \langle \tilde{w}, x_{(i)} \rangle. \qquad (10)$$

In Eq. 10, the coefficients $\beta_0, \beta_1, \dots$ are determined by $\alpha_0, \alpha_1, \dots$, and thus should be generated based on the input of the network. This is the motivation of our proposed dynamic convolution." }, { "heading": "A.2 PROOF OF EQ. 9", "text": "We denote $g_{ij}(x) = \langle x_{(i)}, y_j \rangle$ for $j = 0, 1, \dots, d-1$. Then,

$$g_{ij}(x) = \langle x_{(i)}, y_j \rangle = \Big\langle \bar{x}_{(i)} + \beta w_k + \sum_{t=0}^{d-1} \alpha_t y_t,\, y_j \Big\rangle = \beta \langle w_k, y_j \rangle + \sum_{t=0}^{d-1} \alpha_t \langle y_t, y_j \rangle. \qquad (11)$$

Collecting Eq. 7 and Eq. 11, we get the following linear system:

$$\begin{pmatrix} \langle w_k, w_k \rangle & \langle y_0, w_k \rangle & \langle y_1, w_k \rangle & \cdots & \langle y_{d-1}, w_k \rangle \\ \langle w_k, y_0 \rangle & \langle y_0, y_0 \rangle & \langle y_1, y_0 \rangle & \cdots & \langle y_{d-1}, y_0 \rangle \\ \langle w_k, y_1 \rangle & \langle y_0, y_1 \rangle & \langle y_1, y_1 \rangle & \cdots & \langle y_{d-1}, y_1 \rangle \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \langle w_k, y_{d-1} \rangle & \langle y_0, y_{d-1} \rangle & \langle y_1, y_{d-1} \rangle & \cdots & \langle y_{d-1}, y_{d-1} \rangle \end{pmatrix} \begin{pmatrix} \beta \\ \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_{d-1} \end{pmatrix} = \begin{pmatrix} f_i(x) \\ g_{i0}(x) \\ g_{i1}(x) \\ \vdots \\ g_{i(d-1)}(x) \end{pmatrix}, \qquad (12)$$

which we abbreviate as

$$A \vec{x} = \vec{b}. \qquad (13)$$

Because $w_k \notin \Psi$, we can write $w_k$ as:

$$w_k = \gamma_\perp w_\perp + \sum_{j=0}^{d-1} \gamma_j y_j, \qquad (14)$$

where $w_\perp$ is orthogonal to $y_0, \dots, y_{d-1}$ and $\gamma_\perp \neq 0$. Moreover, because $\|w_k\| = 1$,

$$|\gamma_\perp|^2 + \sum_{j=0}^{d-1} |\gamma_j|^2 = 1. \qquad (15)$$

It is easily verified that

$$A = \begin{pmatrix} 1 & \gamma_0 & \gamma_1 & \cdots & \gamma_{d-1} \\ \gamma_0 & 1 & 0 & \cdots & 0 \\ \gamma_1 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \gamma_{d-1} & 0 & 0 & \cdots & 1 \end{pmatrix}, \qquad (16)$$

and hence, by successively eliminating the entries $\gamma_0, \gamma_1, \dots, \gamma_{d-1}$ of the first row,

$$|A| = 1 - \gamma_0^2 - \gamma_1^2 - \dots - \gamma_{d-1}^2 = \gamma_\perp^2 \neq 0. \qquad (17)$$

Thus,

$$\vec{x} = A^{-1} \vec{b}. \qquad (18)$$

If we denote the elements of the first row of $A^{-1}$ as $a_{00}, a_{01}, \dots, a_{0d}$, then

$$f_i(x^{white}) = \beta = a_{00} f_i(x) + \sum_{j=0}^{d-1} a_{0(j+1)} g_{ij}(x) = a_{00} \langle w_k, x_{(i)} \rangle + \sum_{j=0}^{d-1} a_{0(j+1)} \langle y_j, x_{(i)} \rangle = \Big\langle a_{00} w_k + \sum_{j=0}^{d-1} a_{0(j+1)} y_j,\, x_{(i)} \Big\rangle. \qquad (19)$$

Because $\Psi \subset \Omega$, there exist $\{\beta_t \in \mathbb{R} \mid t = 0, 1, \dots, c\}$ such that

$$\sum_{j=0}^{d-1} a_{0(j+1)} y_j = \sum_t \beta_t w_t. \qquad (20)$$

Then,

$$f_i(x^{white}) = \Big\langle a_{00} w_k + \sum_t \beta_t w_t,\, x_{(i)} \Big\rangle = (a_{00} + \beta_k) \langle w_k, x_{(i)} \rangle + \sum_{t \neq k} \beta_t \langle w_t, x_{(i)} \rangle, \qquad (21)$$" } ]
2019
null
SP:9e712c6f60b19d9309721eea514589755b4ce648
[ "The paper derives results for nonnegative-matrix factorization along the lines of recent results on SGD for DNNs, showing that the loss is star-convex towards randomized planted solutions. The star-convexity property is also shown to hold to some degree on real world datasets. The paper argues that these results explain the good performance that usual gradient descent procedures achieve in practice. The paper also puts forward a conjecture that more parameters make the loss function easier to optimize by making it more likely that star convexity holds, and that a similar conclusion could hold for DNNs.", "This paper studies loss landscape of Non-negative matrix factorization (NMF) when the matrix is very large. It shows that with high probability, the landscape is quasi-convex under some conditions. This suggests that the optimization problem would become easier as the size of the matrix becomes very large. Implications on deep networks are also discussed. " ]
Non-negative matrix factorization (NMF) is a highly celebrated algorithm for matrix decomposition that guarantees non-negative factors. The underlying optimization problem is computationally intractable, yet in practice gradient descent based solvers often find good solutions. This gap between computational hardness and practical success mirrors recent observations in deep learning, where it has been the focus of extensive discussion and analysis. In this paper we revisit the NMF optimization problem and analyze its loss landscape in non-worst-case settings. It has recently been observed that gradients in deep networks tend to point towards the final minimizer throughout the optimization. We show that a similar property holds (with high probability) for NMF, provably in a non-worst-case model with a planted solution, and empirically across an extensive suite of real-world NMF problems. Our analysis predicts that this property becomes more likely with a growing number of parameters, and experiments suggest that a similar trend might also hold for deep neural networks — turning increasing data sets and models into a blessing from an optimization perspective.
[]
[ { "authors": [ "P. Afshani", "J. Barbay", "T.M. Chan" ], "title": "Instance-optimal geometric algorithms", "venue": "Journal of the ACM (JACM),", "year": 2017 }, { "authors": [ "R. Ahlswede", "A. Winter" ], "title": "Strong converse for identification via quantum channels", "venue": "IEEE Transactions on Information Theory,", "year": 2002 }, { "authors": [ "N. Ampazis" ], "title": "Collaborative filtering via concept decomposition on the netflix dataset", "venue": "In ECAI,", "year": 2008 }, { "authors": [ "S. Arora", "R. Ge", "R. Kannan", "A. Moitra" ], "title": "Computing a nonnegative matrix factorization–provably", "venue": "In Proceedings of the forty-fourth annual ACM symposium on Theory of computing,", "year": 2012 }, { "authors": [ "S. Arora", "N. Cohen", "E. Hazan" ], "title": "On the optimization of deep networks: Implicit acceleration by overparameterization", "venue": "arXiv preprint arXiv:1802.06509,", "year": 2018 }, { "authors": [ "B. Barak", "S.B. Hopkins", "J. Kelner", "P. Kothari", "A. Moitra", "A. Potechin" ], "title": "A nearly tight sum-ofsquares lower bound for the planted clique problem", "venue": "IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2016 }, { "authors": [ "O. Berne", "C. Joblin", "Y. Deville", "J. Smith", "M. Rapacioli", "J. Bernard", "J. Thomas", "W. Reach", "A. Abergel" ], "title": "Analysis of the emission of very small dust particles from spitzer spectro-imagery data using blind signal separation methods", "venue": "Astronomy & Astrophysics,", "year": 2007 }, { "authors": [ "Y. Bilu", "N. Linial" ], "title": "Are stable instances easy? Combinatorics", "venue": "Probability and Computing,", "year": 2012 }, { "authors": [ "A. Blum", "R.L. Rivest" ], "title": "Training a 3-node neural network is np-complete", "venue": "In Advances in neural information processing systems,", "year": 1989 }, { "authors": [ "D.D. Bourgin" ], "title": "Testing Models of Cognition at Scale", "venue": "PhD thesis, University of California,", "year": 2018 }, { "authors": [ "E. Candes", "J. Romberg", "T. Tao" ], "title": "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information", "venue": "arXiv preprint math/0409186,", "year": 2004 }, { "authors": [ "E.J. Candès", "B. Recht" ], "title": "Exact matrix completion via convex optimization", "venue": "Foundations of Computational mathematics,", "year": 2009 }, { "authors": [ "X. canto Foundation" ], "title": "xeno-canto: Sharing bird sounds from around the world", "venue": "https://www. xeno-canto.org", "year": 2019 }, { "authors": [ "A. Choromanska", "M. Henaff", "M. Mathieu", "G.B. Arous", "Y. LeCun" ], "title": "The loss surfaces of multilayer networks", "venue": "In Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "D. Davis" ], "title": "Mathematics of data science", "venue": "https://people.orie.cornell.edu/dsd95/ orie6340.html", "year": 2019 }, { "authors": [ "A. Decelle", "F. Krzakala", "C. Moore", "L. Zdeborová" ], "title": "Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications", "venue": "Physical Review E,", "year": 2011 }, { "authors": [ "D. Donoho", "V. Stodden" ], "title": "When does non-negative matrix factorization give a correct decomposition into parts", "venue": "In Advances in neural information processing systems,", "year": 2004 }, { "authors": [ "A. 
Dvoredsky" ], "title": "Some results on convex bodies and banach spaces", "venue": null, "year": 1961 }, { "authors": [ "N.B. Erichson", "A. Mendible", "S. Wihlborn", "J.N. Kutz" ], "title": "Randomized nonnegative matrix factorization", "venue": "Pattern Recognition Letters,", "year": 2018 }, { "authors": [ "J. Flenner", "B. Hunter" ], "title": "A deep non-negative matrix factorization neural network. 2017", "venue": null, "year": 2017 }, { "authors": [ "J. Frankle", "M. Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "arXiv preprint arXiv:1803.03635,", "year": 2018 }, { "authors": [ "R. Ge", "F. Huang", "C. Jin", "Y. Yuan" ], "title": "Escaping from saddle points—online stochastic gradient for tensor decomposition", "venue": "In Conference on Learning Theory,", "year": 2015 }, { "authors": [ "N. Gillis" ], "title": "Sparse and unique nonnegative matrix factorization through data preprocessing", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "N. Gillis" ], "title": "The why and how of nonnegative matrix factorization. Regularization, Optimization, Kernels, and Support", "venue": "Vector Machines,", "year": 2014 }, { "authors": [ "X. Glorot", "Y. Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "A. Guionnet", "O. Zeitouni" ], "title": "Concentration of the spectral measure for large matrices", "venue": "Electronic Communications in Probability,", "year": 2000 }, { "authors": [ "F.M. Harper", "J.A. Konstan" ], "title": "The movielens datasets: History and context", "venue": "Acm transactions on interactive intelligent systems (tiis),", "year": 2016 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "W. Hoeffding" ], "title": "Probability inequalities for sums of bounded random variables", "venue": "In The Collected Works of Wassily Hoeffding,", "year": 1994 }, { "authors": [ "P.W. Holland", "K.B. Laskey", "S. Leinhardt" ], "title": "Stochastic blockmodels: First steps", "venue": "Social networks,", "year": 1983 }, { "authors": [ "P.O. Hoyer" ], "title": "Non-negative matrix factorization with sparseness constraints", "venue": "Journal of machine learning research,", "year": 2004 }, { "authors": [ "P. Izmailov", "D. Podoprikhin", "T. Garipov", "D. Vetrov", "A.G. Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "arXiv preprint arXiv:1803.05407,", "year": 2018 }, { "authors": [ "N.S. Keskar", "D. Mudigere", "J. Nocedal", "M. Smelyanskiy", "P.T.P. Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "arXiv preprint arXiv:1609.04836,", "year": 2016 }, { "authors": [ "R. Kleinberg", "Y. Li", "Y. Yuan" ], "title": "An alternative view: When does sgd escape local minima", "venue": "arXiv preprint arXiv:1802.06175,", "year": 2018 }, { "authors": [ "Y. Koren", "R. Bell", "C. Volinsky" ], "title": "Matrix factorization techniques for recommender systems", "venue": null, "year": 2009 }, { "authors": [ "A. Krizhevsky", "V. Nair", "G. Hinton" ], "title": "The cifar-10 dataset", "venue": "online: http://www. cs. toronto. edu/kriz/cifar. 
html,", "year": 2014 }, { "authors": [ "M. Kula" ], "title": "Mixture-of-tastes models for representing users with diverse interests", "venue": "arXiv preprint arXiv:1711.08379,", "year": 2017 }, { "authors": [ "M. Ledoux" ], "title": "The concentration of measure phenomenon", "venue": "Number 89. American Mathematical Soc.,", "year": 2001 }, { "authors": [ "D.D. Lee", "H.S. Seung" ], "title": "Learning the parts of objects by non-negative matrix factorization", "venue": null, "year": 1999 }, { "authors": [ "D.D. Lee", "H.S. Seung" ], "title": "Algorithms for non-negative matrix factorization", "venue": "In Advances in neural information processing systems,", "year": 2001 }, { "authors": [ "J.C. Lee", "P. Valiant" ], "title": "Optimizing star-convex functions", "venue": "IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2016 }, { "authors": [ "H. Li", "Z. Xu", "G. Taylor", "C. Studer", "T. Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "L. Li", "G. Lebanon", "H. Park" ], "title": "Fast bregman divergence nmf using taylor expansion and coordinate descent", "venue": "In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2012 }, { "authors": [ "S.Z. Li", "X. Hou", "H. Zhang", "Q. Cheng" ], "title": "Learning spatially localized, parts-based representation", "venue": "CVPR (1),", "year": 2001 }, { "authors": [ "Y. Li", "A. Ngom" ], "title": "The non-negative matrix factorization toolbox for biological data mining", "venue": "Source code for biology and medicine,", "year": 2013 }, { "authors": [ "Y. Li", "Y. Yuan" ], "title": "Convergence analysis of two-layer neural networks with relu activation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "C. Lu", "X. Tang" ], "title": "Surpassing human-level face verification performance on lfw with gaussianface", "venue": "In Twenty-ninth AAAI conference on artificial intelligence,", "year": 2015 }, { "authors": [ "H. Lu", "K. Kawaguchi" ], "title": "Depth creates no bad local minima", "venue": "arXiv preprint arXiv:1702.08580,", "year": 2017 }, { "authors": [ "X. Luo", "M. Zhou", "Y. Xia", "Q. Zhu" ], "title": "An efficient non-negative matrix-factorization-based approach to collaborative filtering for recommender systems", "venue": "IEEE Transactions on Industrial Informatics,", "year": 2014 }, { "authors": [ "Y. Mao", "L.K. Saul", "J.M. Smith" ], "title": "Ides: An internet distance estimation service for large networks", "venue": "IEEE Journal on Selected Areas in Communications,", "year": 2006 }, { "authors": [ "M. Meckes", "S. Szarek" ], "title": "Concentration for noncommutative polynomials in random matrices", "venue": "Proceedings of the American Mathematical Society,", "year": 2012 }, { "authors": [ "V. Mnih", "K. Kavukcuoglu", "D. Silver", "A.A. Rusu", "J. Veness", "M.G. Bellemare", "A. Graves", "M. Riedmiller", "A.K. Fidjeland", "G. Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "P.M. Pardalos", "S.A. Vavasis" ], "title": "Quadratic programming with one negative eigenvalue is np-hard", "venue": "Journal of Global Optimization,", "year": 1991 }, { "authors": [ "J. Pennington", "Y. 
Bahri" ], "title": "Geometry of neural network loss surfaces via random matrix theory", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "M.N. Schmidt", "J. Larsen", "F.-T. Hsiao" ], "title": "Wind noise reduction using non-negative sparse coding", "venue": "IEEE workshop on machine learning for signal processing,", "year": 2007 }, { "authors": [ "D.A. Spielman", "S.-H. Teng" ], "title": "Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time", "venue": "Journal of the ACM (JACM),", "year": 2004 }, { "authors": [ "S.K. Suram", "Y. Xue", "J. Bai", "R. Le Bras", "B. Rappazzo", "R. Bernstein", "J. Bjorck", "L. Zhou", "R.B. van Dover", "C.P. Gomes" ], "title": "Automated phase mapping with agilefd and its application to light absorber discovery in the v–mn–nb oxide system", "venue": "ACS combinatorial science,", "year": 2016 }, { "authors": [ "G.F. Trindade", "M.-L. Abel", "J.F. Watts" ], "title": "Non-negative matrix factorisation of large mass spectrometry", "venue": "datasets. Chemometrics and Intelligent Laboratory Systems,", "year": 2017 }, { "authors": [ "J.A. Tropp" ], "title": "User-friendly tail bounds for sums of random matrices", "venue": "Foundations of computational mathematics,", "year": 2012 }, { "authors": [ "S.A. Vavasis" ], "title": "On the complexity of nonnegative matrix factorization", "venue": "SIAM Journal on Optimization,", "year": 2009 }, { "authors": [ "R. Vershynin" ], "title": "Introduction to the non-asymptotic analysis of random matrices", "venue": "arXiv preprint arXiv:1011.3027,", "year": 2010 }, { "authors": [ "R. Vershynin" ], "title": "High-dimensional probability: An introduction with applications in data science, volume 47", "venue": null, "year": 2018 }, { "authors": [ "L. Xiao", "Y. Bahri", "J. Sohl-Dickstein", "S.S. Schoenholz", "J. Pennington" ], "title": "Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks", "venue": "arXiv preprint arXiv:1806.05393,", "year": 2018 }, { "authors": [ "S. Zhang", "W. Wang", "J. Ford", "F. Makedon" ], "title": "Learning from incomplete ratings using non-negative matrix factorization", "venue": "In Proceedings of the 2006 SIAM international conference on data mining,", "year": 2006 }, { "authors": [ "Y. Zhou", "D. Wilkinson", "R. Schreiber", "R. Pan" ], "title": "Large-scale parallel collaborative filtering for the netflix prize", "venue": "In International conference on algorithmic applications in management,", "year": 2008 }, { "authors": [ "Y. Zhou", "J. Yang", "H. Zhang", "Y. Liang", "V. Tarokh" ], "title": "Sgd converges to global minimum in deep learning via star-convex", "venue": null, "year": 1901 }, { "authors": [ "G. Zhu" ], "title": "Nonnegative matrix factorization (nmf) with heteroscedastic uncertainties and missing data", "venue": "arXiv preprint arXiv:1612.06037,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Non-negative matrix factorization (NMF) is a ubiquitous technique for data analysis where one attempts to factorize a measurement matrix X into the product of non-negative matrices U,V (Lee and Seung, 1999). This simple problem has applications in recommender systems (Luo et al., 2014), scientific analysis (Berne et al., 2007; Trindade et al., 2017), computer vision (Gillis, 2012), internet distance prediction (Mao et al., 2006), audio processing (Schmidt et al., 2007) and many more domains. Often, the non-negativity is crucial for interpretability, for example, in the context of crystallography, the light sources, which are represented as matrix factors, have non-negative intensity (Suram et al., 2016).\nLike many other non-convex optimization problems, finding the exact solution to NMF is NP-hard (Pardalos and Vavasis, 1991; Vavasis, 2009). NMF’s tremendous practical success is however at odds with such worst-case analysis, and simple algorithms based upon gradient descent are known to find good solutions in real-world settings (Lee and Seung, 2001). At the time when NMF was proposed, most analyses of optimization problems within machine learning focused on convex formulations such as SVMs (Cortes and Vapnik, 1995), but owing to the success of neural networks, non-convex optimization has experienced a resurgence in interest. Here, we revisit NMF from a fresh perspective, utilizing recent tools developed in the context of optimization in deep learning. Specifically, our main inspiration is the recent work of Kleinberg et al. (2018) and Zhou et al. (2019) that empirically demonstrate that gradients typically point towards the final minimizer for neural networks trained on real-world datasets and analyze the implications of such convexity properties for efficient optimization.\nIn this paper, we show theoretically and empirically that a similar property called star-convexity holds in NMF. From a theoretical perspective, we consider an NMF instance with planted solution, inspired by the stochastic block model for social networks (Holland et al., 1983; Decelle et al., 2011) and the planted clique problem studied in sum-of-squares literature (Barak et al., 2016). We prove that between two points the loss is convex with high probability, and conclude that the loss surface is star-convex in the typical case — even if the loss is computed over unobserved data. 
From an empirical perspective, we verify that our theoretical results hold for an extensive collection
of real-world datasets spanning collaborative filtering (Zhou et al., 2008; Kula, 2017; Harper and Konstan, 2016), signal decomposition (Zhu, 2016; Li and Ngom, 2013; Li et al., 2001; Erichson et al., 2018) and audio processing (Flenner and Hunter, 2017; canto Foundation), and demonstrate that the star-convex behavior results in efficient optimization. Finally, we show that star-convex behavior becomes more likely with a growing number of parameters, suggesting that a similar result may hold as neural networks become wider. We provide supporting empirical evidence for this hypothesis on modern network architectures." }, { "heading": "2 NMF AND STAR-CONVEXITY", "text": "The aim of NMF is to decompose some large measurement matrix $X \in \mathbb{R}^{n \times m}$ into two non-negative matrices $U \in \mathbb{R}^{n \times r}_+$ and $V \in \mathbb{R}^{r \times m}_+$ such that $X \approx UV$. The canonical formulation of NMF is

$$\min_{U, V \geq 0} \ell(U, V), \quad \text{where } \ell(U, V) = \frac{1}{2} \|UV - X\|_F^2 \qquad (1)$$

NMF is commonly used in recommender systems, where entry $(i, j)$ of $X$ may for example correspond to the rating user $i$ has given to movie $j$ (Luo et al., 2014). In such settings, data might be missing, as not all users have rated all movies. In those cases, it is common to only consider the loss over observed data (Zhang et al., 2006; Candès and Recht, 2009). We let $\hat{1}_{(i,j)}$ be an indicator variable that is 1 if entry $(i, j)$ is "observed" and 0 otherwise. The loss function is then

$$\min_{U, V \geq 0} \ell(U, V) = \frac{1}{2} \sum_{i,j} \hat{1}_{(i,j)} \big( [UV]_{ij} - X_{ij} \big)^2 \qquad (2)$$

NMF is similar to PCA, which admits spectral strategies; however, the non-negativity constraints in NMF prevent such solutions and result in NP-hardness (Vavasis, 2009). Work on the computational complexity of NMF has shown that the problem is tractable for small constant dimensions $r$ via algebraic methods (Arora et al., 2012). In practice, however, these algorithms are not used, and simple variants of gradient descent, possibly via multiplicative updates (Lee and Seung, 2001), are popular and are known to work reliably (Koren et al., 2009).
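As a concrete instance of the simple solvers referred to above, the following NumPy sketch minimizes the masked objective of equation 2 by projected gradient descent. This is our illustration, not code from the paper; the step size, iteration count and problem sizes are arbitrary assumptions.

```python
import numpy as np

def nmf_pgd(X, mask, r, steps=2000, lr=1e-3, seed=0):
    # projected gradient descent on 0.5 * ||mask * (UV - X)||_F^2,
    # with non-negativity enforced by clipping after each step
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U, V = rng.random((n, r)), rng.random((r, m))
    for _ in range(steps):
        R = mask * (U @ V - X)          # residual on observed entries only
        gU, gV = R @ V.T, U.T @ R       # gradients w.r.t. U and V
        U = np.maximum(U - lr * gU, 0.0)
        V = np.maximum(V - lr * gV, 0.0)
    return U, V

# toy usage: random planted instance with half of the entries observed
rng = np.random.default_rng(1)
Us, Vs = np.abs(rng.normal(size=(100, 10))), np.abs(rng.normal(size=(10, 100)))
X = Us @ Vs
mask = (rng.random(X.shape) < 0.5).astype(float)
U, V = nmf_pgd(X, mask, r=10)
print("masked loss:", 0.5 * np.sum((mask * (U @ V - X)) ** 2))
```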
This gap between theoretical hardness and practical performance is also found in deep learning. Optimizing neural networks is NP-hard in general (Blum and Rivest, 1989), but in practice they can be optimized with simple stochastic gradient descent algorithms to outmatch humans in tasks such as face verification (Lu and Tang, 2015) and playing Atari games (Mnih et al., 2015). Recent work on understanding the geometry of neural network loss surfaces has promoted the idea of convexity properties. The work of Izmailov et al. (2018) shows that the loss surface is convex around the local optimum, while Zhou et al. (2019) and Kleinberg et al. (2018) show that the gradients during optimization typically point towards the local minima the network eventually converges to. Of central importance in this line of work is star-convexity, a property of a function $f$ that guarantees that it is convex along straight paths towards the optimum $x^*$. See Figure 2 for an example. Formally, it is defined as follows.

Definition 1. A function $f : \mathbb{R}^n \to \mathbb{R}$ is star-convex towards $x^*$ if for all $\lambda \in [0, 1]$ and $x \in \mathbb{R}^n$, we have $f(\lambda x + (1 - \lambda) x^*) \leq \lambda f(x) + (1 - \lambda) f(x^*)$.

Optimizing star-convex functions can be done in polynomial time (Lee and Valiant, 2016), and in Kleinberg et al. (2018) it is shown that the function only needs to be star-convex under a natural noise model. NMF is not star-convex in general, as it is NP-hard; however, it is natural to conjecture that NMF is star-convex in the typical case. Such a property could explain the practical success of NMF on real-world datasets, which are not worst-case. This will be the working hypothesis of this paper, with the typical case formalized in Theorem 1. Indeed, one can verify numerically that NMF is typically star-convex for natural distributions and realistically sized matrices; see Figure 1, where we consider a rank-10 decomposition of $100 \times 100$ matrices with iid half-normal entries and a planted solution, sampled as per Assumption 1 given in the next section. The following sections are dedicated to proving that NMF is star-convex with high probability in a planted model, and to confirming that this phenomenon generalizes to datasets from the real world, which are far from worst-case.
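The numerical check mentioned above is straightforward to reproduce. The NumPy sketch below is our illustration (the sizes match the Figure 1 description; everything else is an assumption): it samples a planted rank-10 instance with iid half-normal entries and tests the inequality of Definition 1 along the segment towards the planted solution.

```python
import numpy as np

def loss(U, V, X):
    return 0.5 * np.sum((U @ V - X) ** 2)

rng = np.random.default_rng(0)
n, m, r = 100, 100, 10
half = lambda shape: np.abs(rng.normal(size=shape))   # iid half-normal entries
Us, Vs = half((n, r)), half((r, m))                   # planted solution (U*, V*)
X = Us @ Vs                                           # so loss(Us, Vs, X) == 0
U0, V0 = half((n, r)), half((r, m))                   # random point

# Definition 1: f(lam*x + (1-lam)*x*) <= lam*f(x) + (1-lam)*f(x*)
f0, fs = loss(U0, V0, X), loss(Us, Vs, X)
for lam in np.linspace(0.0, 1.0, 11):
    U = lam * U0 + (1 - lam) * Us
    V = lam * V0 + (1 - lam) * Vs
    print(f"lam={lam:.1f}  f={loss(U, V, X):12.1f}  chord={lam * f0 + (1 - lam) * fs:12.1f}")
```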
" }, { "heading": "3 PROVING TYPICAL-CASE STAR-CONVEXITY", "text": "Our aim now is to prove that the NMF loss function is typically star-convex for natural, non-worst-case distributions of NMF instances. We will consider a slightly weaker notion of star-convexity where $f(\lambda x + (1 - \lambda) x^*) \leq \lambda f(x) + (1 - \lambda) f(x^*)$ holds not for all $x$, but for random $x$ with high probability. This is in fact the best achievable: an NMF instance with $u_1 = 1$, $u^* = 0$ and $v_1 = 0$, $v^* = 1$ is not star-convex. Our results hold with high probability in high dimensions, similar to Dvoretzky's theorem in convex geometry (Dvoredsky, 1961; Davis).

Inspired by the stochastic block model of social networks (Holland et al., 1983; Decelle et al., 2011) and the planted clique problem (Barak et al., 2016), we focus on a setting with a planted random solution. In section 4 we verify that conclusions drawn from this model transfer to real-world datasets. We will assume that there is a planted optimal solution $(U^*, V^*)$, where the entries of these matrices are sampled iid from a class of distributions with good concentration properties that includes the half-normal distribution and bounded distributions. As is standard in random matrix theory (Vershynin, 2010), we develop non-asymptotic results that hold with a probability that grows as the matrices of shape $(n, r)$ and $(r, m)$ become large. For this reason, we need to specify how $r$ and $m$ depend on $n$.

Assumption 1. For $(U, V) \in \mathbb{R}^{n \times r} \times \mathbb{R}^{r \times m}$ we assume that the entries of the matrices $U, V$ are sampled iid from a continuous distribution with non-negative support that either (i) is bounded or (ii) can be expressed as a 1-Lipschitz function of a Gaussian distribution. As $n \to \infty$, we assume that $r$ grows as $n^\gamma$ up to a constant factor for $\gamma \in [1/2, 1]$, and $m$ as $n$ up to a constant factor.

We are now ready to state our main result: the loss function in equation 1 is convex on a straight line between points sampled as per Assumption 1, and thus satisfies our slightly weaker notion of star-convexity, with high probability. The probability increases as the size of the problem increases, suggesting a surprising benefit of high dimensionality. We also show similar results for the loss function of equation 2 with unobserved data, under the assumption that any entry is observed independently with constant probability $p$. Below we sketch the proof idea and key ingredients; the formal proof is given in Appendix D.

Theorem 1. (Main) Let matrices $U_1, V_1, U_2, V_2, U^*, V^*$ be sampled according to Assumption 1. Then there exist positive constants $c_1, c_2$ such that with probability $\geq 1 - c_1 \exp(-c_2 n^{1/3})$, the loss function $\ell(U, V)$ in equation 1 is convex on the straight line $(U_1, V_1) \to (U_2, V_2)$. The same holds along the line $(U_1, V_1) \to (U^*, V^*)$. It also holds if any entry $(i, j)$ is observed independently with constant probability $p$, but with probability $\geq 1 - c_1 \exp(-c_2 r^{1/3})$." }, { "heading": "3.1 PROOF STRATEGY", "text": "Let us parametrize the NMF solution one gets along the line $(U_2, V_2) \to (U_1, V_1)$ as

$$\hat{X}(\lambda) = \big( \lambda U_1 + (1 - \lambda) U_2 \big) \big( \lambda V_1 + (1 - \lambda) V_2 \big)$$

Proving Theorem 1 amounts to showing that the loss function $\ell(\lambda) = \frac{1}{2} \|\hat{X}(\lambda) - X\|_F^2$ is convex in $\lambda$ with high probability; our strategy is to show that its second derivative is non-negative. For fixed matrices $U_1, U_2, U^*, V_1, V_2, V^*$, the function $\ell(\lambda)$ is a fourth-degree polynomial in $\lambda$, so its second derivative w.r.t. $\lambda$ will be a second-degree polynomial in $\lambda$. For a general second-degree polynomial $p(x) = ax^2 + bx + c$ we have $p(x) = \frac{1}{a} \big[ (ax + \frac{b}{2})^2 + (ac - \frac{b^2}{4}) \big]$. If $a > 0$, which is the case here (see Appendix D), showing that it is positive can be done by showing $ac \geq \frac{b^2}{4}$. This is equivalent to showing that

$$2 \|W_2\|_F^2 \big( \|W_1\|_F^2 + 2 \langle W_0, W_2 \rangle \big) \geq 3 \big( \langle W_1, W_2 \rangle \big)^2 \qquad (3)$$

where the matrices $W_0, W_1, W_2$ are given as $W_0 = U_2 V_2 - U^* V^*$, $W_1 = (U_1 - U_2) V_2 + U_2 (V_1 - V_2)$, and $W_2 = (U_1 - U_2)(V_1 - V_2)$. By slight abuse of notation, we have used $\langle A, B \rangle$ to denote $\mathrm{Tr}(A B^T)$ for matrices $A, B$ of the same shape. Replacing the terms in equation 3 by their means, one gets

$$2 (4rmn\sigma^4) \big( 6rmn\sigma^4 + 4rmn\mu^2\sigma^2 + 2rmn\sigma^4 \big) \geq 3 \big( -4rmn\sigma^4 \big)^2 \qquad (4)$$

Here, $\sigma^2$ is the variance of the distribution of the entries in the matrices, while $\mu$ is the mean. By just counting terms of order $(rmn\sigma^4)^2$, we see that the LHS has 64 such terms while the RHS has only 48. Thus, if all the matrices $W_0, W_1$ and $W_2$ were exactly equal to their means, the inequality in equation 3 would hold. To prove that it holds in general, we use concentration of measure results from random matrix theory to show that the terms are concentrated around their means and that large deviations are exponentially unlikely.
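The sufficient condition in equation 3 is also easy to probe numerically. The sketch below is ours (the half-normal distribution and sizes are illustrative assumptions); it samples one instance as in Assumption 1 and evaluates both sides of the inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 100, 100, 10
half = lambda shape: np.abs(rng.normal(size=shape))
U1, U2, Us = half((n, r)), half((n, r)), half((n, r))
V1, V2, Vs = half((r, m)), half((r, m)), half((r, m))

# W_0, W_1, W_2 as defined above; <A, B> = Tr(A B^T)
W0 = U2 @ V2 - Us @ Vs
W1 = (U1 - U2) @ V2 + U2 @ (V1 - V2)
W2 = (U1 - U2) @ (V1 - V2)
ip = lambda A, B: np.trace(A @ B.T)

lhs = 2 * np.sum(W2 ** 2) * (np.sum(W1 ** 2) + 2 * ip(W0, W2))
rhs = 3 * ip(W1, W2) ** 2
print("inequality (3) holds:", lhs >= rhs)   # True with high probability
```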
Concentration of measure Consider the matrix $W_2 = (U_1 - U_2)(V_1 - V_2)$. Given that all matrices are iid, we can center the variables such that $W_2 = (U_1 - U_2)(V_1 - V_2) = (\bar{U}_1 - \bar{U}_2)(\bar{V}_1 - \bar{V}_2)$, where the bar denotes the centered matrices. The term $\|W_2\|_F^2$ can then be written as $\mathrm{Tr}\, (\bar{V}_1 - \bar{V}_2)^T (\bar{U}_1 - \bar{U}_2)^T (\bar{U}_1 - \bar{U}_2)(\bar{V}_1 - \bar{V}_2)$. Given that all matrix entries are independent as per Assumption 1, we would expect some concentration of measure to hold. Bernstein-type inequalities turn out to be too weak for our purposes, but the field of random matrix theory offers stronger results for matrices with independent sub-Gaussian entries (Ahlswede and Winter, 2002; Tropp, 2012; Guionnet et al., 2000; Meckes and Szarek, 2012). Via such results one can achieve the following concentration result; see Appendix D.

$$\mathbb{P}\big( \big| \|W_2\|_F^2 - \mathbb{E}\big[ \|W_2\|_F^2 \big] \big| > t\, r n^2 \big) \leq c_3 \exp\big( -c_4 \min(t^2, t^{1/2})\, n \big) \qquad (5)$$

where $c_3, c_4$ are positive constants. In some expressions we will however not be able to center all variables, and for such expressions one gets similar but slightly weaker concentration results where the exponent scales as $n^{1/3}$ instead of $n$; see Appendix D.

Proof sketch Given that $\mathbb{E}\big[ \|W_2\|_F^2 \big] = 4rmn\sigma^4$, equation 5 says that the probability that this term deviates from its mean by a relative factor $\epsilon$ is less than $c_3 \exp(-c_5 \epsilon^2 n)$ for some small $\epsilon$. By applying similar arguments to the terms $\langle W_0, W_2 \rangle$ and $\langle W_1, W_2 \rangle$, one can show that the probability that they deviate by a relative factor $\epsilon$ is less than $c_6 \exp(-c_7 \epsilon^2 n^{1/3})$. A problematic term is $\|W_1\|_F^2$, which contains a term of the type $\mathrm{Tr}\, (\bar{V}_1 - \bar{V}_2)^T \mu_1^T \mu_1 (\bar{V}_1 - \bar{V}_2)$ with weak concentration properties. Matrices of the type $A^T A$ are psd, and psd matrices have non-negative trace. Hence, this term is non-negative, and since it occurs on the LHS of equation 3, we can simply omit it to lower bound the convexity. Using a union bound, we bound the probability that at least one term deviates by a relative factor $\epsilon$ by $c_1 \exp(-c_8 \epsilon^2 n^{1/3})$ for positive constants $c_1, c_8$. Now, set $\epsilon = 0.01$. If no variable deviates by a factor of more than 0.01, then the inequality in equation 3 still holds, since $0.99^2 \cdot 64 \geq 1.01^2 \cdot 48$. Thus, the inequality is violated with probability at most $c_1 \exp(-c_2 n^{1/3})$ for positive $c_1, c_2$.
sha1_base64=\"/JI5iynvNWOEKZjLq6ujHWhdsHw=\">AAAB7nicbVDLSsNAFL2pr1pfVZduBovgqiQi1GXRjcsK9gFtKJPJpB06mYSZG6GEfoQbF4q49Xvc+TdO2yy09cDA4ZxzmXtPkEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6jhUijeRoGS91LNaRxI3g0md3O/+8S1EYl6xGnK/ZiOlIgEo2il7kDaaEiH1Zpbdxcg68QrSA0KtIbVr0GYsCzmCpmkxvQ9N0U/pxoFk3xWGWSGp5RN6Ij3LVU05sbPF+vOyIVVQhIl2j6FZKH+nshpbMw0Dmwypjg2q95c/M/rZxjd+LlQaYZcseVHUSYJJmR+OwmF5gzl1BLKtLC7EjammjK0DVVsCd7qyeukc1X33Lr3cF1r3hZ1lOEMzuESPGhAE+6hBW1gMIFneIU3J3VenHfnYxktOcXMKfyB8/kDPMWPfQ==</latexit>\n<latexit sha1_base64=\"/JI5iynvNWOEKZjLq6ujHWhdsHw=\">AAAB7nicbVDLSsNAFL2pr1pfVZduBovgqiQi1GXRjcsK9gFtKJPJpB06mYSZG6GEfoQbF4q49Xvc+TdO2yy09cDA4ZxzmXtPkEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6jhUijeRoGS91LNaRxI3g0md3O/+8S1EYl6xGnK/ZiOlIgEo2il7kDaaEiH1Zpbdxcg68QrSA0KtIbVr0GYsCzmCpmkxvQ9N0U/pxoFk3xWGWSGp5RN6Ij3LVU05sbPF+vOyIVVQhIl2j6FZKH+nshpbMw0Dmwypjg2q95c/M/rZxjd+LlQaYZcseVHUSYJJmR+OwmF5gzl1BLKtLC7EjammjK0DVVsCd7qyeukc1X33Lr3cF1r3hZ1lOEMzuESPGhAE+6hBW1gMIFneIU3J3VenHfnYxktOcXMKfyB8/kDPMWPfQ==</latexit><latexit sha1_base64=\"/JI5iynvNWOEKZjLq6ujHWhdsHw=\">AAAB7nicbVDLSsNAFL2pr1pfVZduBovgqiQi1GXRjcsK9gFtKJPJpB06mYSZG6GEfoQbF4q49Xvc+TdO2yy09cDA4ZxzmXtPkEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6jhUijeRoGS91LNaRxI3g0md3O/+8S1EYl6xGnK/ZiOlIgEo2il7kDaaEiH1Zpbdxcg68QrSA0KtIbVr0GYsCzmCpmkxvQ9N0U/pxoFk3xWGWSGp5RN6Ij3LVU05sbPF+vOyIVVQhIl2j6FZKH+nshpbMw0Dmwypjg2q95c/M/rZxjd+LlQaYZcseVHUSYJJmR+OwmF5gzl1BLKtLC7EjammjK0DVVsCd7qyeukc1X33Lr3cF1r3hZ1lOEMzuESPGhAE+6hBW1gMIFneIU3J3VenHfnYxktOcXMKfyB8/kDPMWPfQ==</latexit><latexit sha1_base64=\"/JI5iynvNWOEKZjLq6ujHWhdsHw=\">AAAB7nicbVDLSsNAFL2pr1pfVZduBovgqiQi1GXRjcsK9gFtKJPJpB06mYSZG6GEfoQbF4q49Xvc+TdO2yy09cDA4ZxzmXtPkEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6jhUijeRoGS91LNaRxI3g0md3O/+8S1EYl6xGnK/ZiOlIgEo2il7kDaaEiH1Zpbdxcg68QrSA0KtIbVr0GYsCzmCpmkxvQ9N0U/pxoFk3xWGWSGp5RN6Ij3LVU05sbPF+vOyIVVQhIl2j6FZKH+nshpbMw0Dmwypjg2q95c/M/rZxjd+LlQaYZcseVHUSYJJmR+OwmF5gzl1BLKtLC7EjammjK0DVVsCd7qyeukc1X33Lr3cF1r3hZ1lOEMzuESPGhAE+6hBW1gMIFneIU3J3VenHfnYxktOcXMKfyB8/kDPMWPfQ==</latexit><latexit sha1_base64=\"/JI5iynvNWOEKZjLq6ujHWhdsHw=\">AAAB7nicbVDLSsNAFL2pr1pfVZduBovgqiQi1GXRjcsK9gFtKJPJpB06mYSZG6GEfoQbF4q49Xvc+TdO2yy09cDA4ZxzmXtPkEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6jhUijeRoGS91LNaRxI3g0md3O/+8S1EYl6xGnK/ZiOlIgEo2il7kDaaEiH1Zpbdxcg68QrSA0KtIbVr0GYsCzmCpmkxvQ9N0U/pxoFk3xWGWSGp5RN6Ij3LVU05sbPF+vOyIVVQhIl2j6FZKH+nshpbMw0Dmwypjg2q95c/M/rZxjd+LlQaYZcseVHUSYJJmR+OwmF5gzl1BLKtLC7EjammjK0DVVsCd7qyeukc1X33Lr3cF1r3hZ1lOEMzuESPGhAE+6hBW1gMIFneIU3J3VenHfnYxktOcXMKfyB8/kDPMWPfQ==</latexit>\n<latexit sha1_base64=\"/JI5iynvNWOEKZjLq6ujHWhdsHw=\">AAAB7nicbVDLSsNAFL2pr1pfVZduBovgqiQi1GXRjcsK9gFtKJPJpB06mYSZG6GEfoQbF4q49Xvc+TdO2yy09cDA4ZxzmXtPkEph0HW/ndLG5tb2Tnm3srd/cHhUPT7pmCTTjLdZIhPdC6jhUijeRoGS91LNaRxI3g0md3O/+8S1EYl6xGnK/ZiOlIgEo2il7kDaaEiH1Zpbdxcg68QrSA0KtIbVr0GYsCzmCpmkxvQ9N0U/pxoFk3xWGWSGp5RN6Ij3LVU05sbPF+vOyIVVQhIl2j6FZKH+nshpbMw0Dmwypjg2q95c/M/rZxjd+LlQaYZcseVHUSYJJmR+OwmF5gzl1BLKtLC7EjammjK0DVVsCd7qyeukc1X33Lr3cF1r3hZ1lOEMzuESPGhAE+6hBW1gMIFneIU3J3VenHfnYxktOcXMKfyB8/kDPMWPfQ==</latexit><latexit 
Proof sketch for unobserved data. If the entries in equation 2 are \"observed\" independently with probability $p$, then for fixed matrices $U_1, U_2, U^*, V_1, V_2, V^*$ such that Theorem 1 holds, we have

$$\mathbb{E}[\ell''(\lambda)] = \mathbb{E}\Big[\sum_{ij}\hat 1_{(i,j)}\Big(\hat X'^2_{ij} + \hat X''_{ij}\big(\hat X_{ij} - X_{ij}\big)\Big)\Big] = p\sum_{ij}\Big(\hat X'^2_{ij} + \hat X''_{ij}\big(\hat X_{ij} - X_{ij}\big)\Big) \ge 0$$

Thus $\mathbb{E}[\ell''(\lambda)] \ge 0$, i.e. the loss is convex in expectation. To show that it is convex with high probability, one first observes that with high probability no entry $(i, j)$ of $\ell''(\lambda)$ is particularly large. Assuming this holds via a union bound, for fixed matrices $U_1, U_2, U^*, V_1, V_2, V^*$ with elements \"observed\" independently with probability $p$, one gets that $\ell''(\lambda)$ is concentrated around its non-negative mean via Hoeffding bounds (Hoeffding, 1994)." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 VERIFYING THEORETICAL PREDICTIONS", "text": "To verify that the conclusions from our planted model hold more broadly, we now consider real-world datasets previously studied in the NMF literature. Some have ranks outside the scope of our theoretical model but still display star-convexity properties, indicating that it might be a general phenomenon. We focus on a handful of representative datasets spanning image analysis, scientific applications and collaborative filtering. In Table 1 we list these datasets together with references and sparsity; the decomposition rank we use is based upon values previously used in the literature.

We perform a non-negative matrix factorization via gradient descent, starting from randomly initialized factors; a minimal sketch of this setup and of the line scans used below follows.
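Below is a minimal NumPy sketch of this procedure — masked squared loss, half-normal initialization, and a scan of the loss along a straight segment between two parameter points (the quantity plotted in Figures 3 and 4). The helper names, toy sizes, and observation rate are our own illustrative choices, not the authors' released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf_loss(U, V, X, mask):
    """Masked squared loss, normalized by the number of observed entries."""
    R = mask * (U @ V - X)
    return 0.5 * np.sum(R ** 2) / mask.sum()

def line_scan(U0, V0, U1, V1, X, mask, num=101):
    """Loss along the straight segment (1 - t)*(U0, V0) + t*(U1, V1)."""
    ts = np.linspace(0.0, 1.0, num)
    losses = np.array([
        nmf_loss((1 - t) * U0 + t * U1, (1 - t) * V0 + t * V1, X, mask)
        for t in ts
    ])
    return ts, losses

# Toy usage: a small planted instance, two half-normal initializations.
n, m, r = 60, 40, 5
X = np.abs(rng.standard_normal((n, r))) @ np.abs(rng.standard_normal((r, m)))
mask = (rng.random((n, m)) < 0.8).astype(float)  # 80% of entries observed
init = lambda shape: np.abs(rng.standard_normal(shape))
ts, losses = line_scan(init((n, r)), init((r, m)),
                       init((n, r)), init((r, m)), X, mask)
# Convexity along the segment <=> non-negative second differences:
print("min second difference:", np.diff(losses, 2).min())
```

Plain gradient descent on this loss (the paper uses a learning rate of 1e-7 for 1e4 steps) produces the local optima used for the scans toward optima in Figure 4.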
To enable comparison between datasets, we scale all data matrices so that the variance of observed entries is 1 and divide the loss function by the number of (observed) entries. The initialization is performed with a half-normal distribution scaled so that the means match the dataset. For simplicity we use the same gradient descent parameters for all datasets, a learning rate of 1e-7 with 1e4 gradient steps, which gives convergence for all datasets. For the collaborative filtering datasets (movielens, netflix, and goodbooks) we have unobserved ratings; as is standard in NMF, we only compute the loss over observed ratings (Zhang et al., 2006). In Figure 3 we plot the loss function between two random points drawn from the initialization distribution and see that the loss is convex. In Figure 4 we plot the loss function from an initialization point to an independent local optimum. These results agree with our planted model; the NMF loss surface of real-world datasets seems to be largely convex along straight paths. Gradient descent of course only gives local optima, but these still display nice star-convexity properties; finding the true global optimum, however, remains a challenge." }, { "heading": "4.2 ABLATION EXPERIMENTS", "text": "Theorem 1 suggests that as the matrices become larger, NMF is increasingly likely to be star-convex. We are interested to see whether this is the case for our real-world datasets, and to this end we perform ablation experiments varying the dimensions of the matrices.
We decrease the number of data points $n$ by subsampling rows and columns uniformly at random. Our measure of curvature at a point $x$, given some optimal solution $x^*$, is

$$\alpha(x) = \min_{\lambda \in [0,1]} \ell''\big(\lambda x^* + (1-\lambda) x\big) \qquad (6)$$

Note that $\alpha \ge 0$ implies star-convexity. In practice, $x$ and $x^*$ are obtained by random initialization and gradient descent as per the earlier section. For each dataset and subsample rate, we find 100 optima and evaluate the curvature from 50 random points, thus giving 5000 samples of $\alpha$ (a numerical sketch of this estimate is given below, after Section 4.3). Figure 5 shows how the relative deviation $\sigma/\mu$ of $\alpha$ decreases as the dataset becomes larger. As shown in Appendix C, the curvature is always positive; thus the curvature becomes increasingly concentrated around its positive mean for larger matrices. We are also interested in investigating whether the loss surface is star-convex during training. In Figure 6 we show that the cosine similarity between the negative gradients $-\nabla \ell(U, V)$ and the straight line from $x_0$ to $x^*$ is always positive, and that the loss surface is always star-convex during training. As per Lee and Valiant (2016), star-convexity implies efficient optimization. In Figure 7 we illustrate the spectrum of singular values of $U^*$, found for the birdsong dataset, and of a random matrix of the same shape with iid entries from a half-normal distribution. The spectra are similar, and while $U^*$ is not random, it seems to share structural qualities with the random matrices used in our proofs. See Appendix C for figures on more datasets." }, { "heading": "4.3 IMPLICATIONS FOR NEURAL NETWORKS", "text": "We have seen how increasing the number of parameters makes NMF problems more likely to be star-convex, and Figure 5 shows how more parameters make the curvature tend towards its positive mean. Theorem 1 suggests that this is a result of concentration of measure, and it is natural to believe that a similar phenomenon would persist in the context of neural networks. It has previously been observed that neural networks are locally convex (Izmailov et al., 2018; Kleinberg et al., 2018), and also that overparameterization is important in deep learning (Arora et al., 2018; Frankle and Carbin, 2018). Based upon our observations in NMF, we hypothesise that a major benefit of overparameterization is in making the loss surface more convex via concentration of measure w.r.t. the weights.

To verify this hypothesis, we consider image classification on CIFAR10 (Krizhevsky et al., 2014) with Resnet networks (He et al., 2016) trained with standard parameters (see Appendix B). Networks are typically only locally convex, a property we quantify as the length of subsets of random \"lines\" in parameter space along which the loss is convex. We sample a random direction $r$ and then consider a subspace of length one along this direction, centered around the current parameters $w$. We then define the convexity length scale as the length of the maximal sub-interval $[\lambda_1, \lambda_2]$ containing $0$ on which $\ell(w + \lambda r)$ is convex. Directions are sampled from Gaussian distributions and then normalized for each filter $f$ to have the same norm as the weights of $f$ (a sketch of this normalization is given after Appendix B). Table 2 shows how this length scale of local convexity varies with depth, width, and training, where width is varied by multiplying the number of channels by $k$. Increased width indeed makes the landscape increasingly locally convex for all but the most shallow networks, results that support our hypothesis. See Appendix B for extended tables."
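The curvature statistic of equation 6 can be estimated numerically as the smallest scaled second difference of the loss along the segment; the sketch below reuses the hypothetical `nmf_loss`/`line_scan` helpers from the Section 4.1 sketch and is one reasonable realization of the measurement, not necessarily the authors' exact implementation.

```python
import numpy as np

def curvature_alpha(U_x, V_x, U_opt, V_opt, X, mask, num=201):
    """Numerical proxy for alpha(x) = min over lambda in [0, 1] of l''(lambda).

    line_scan traverses (1 - t)*(opt) + t*(x), the same segment as
    lambda*x_opt + (1 - lambda)*x with lambda = 1 - t, so the minimum agrees.
    """
    ts, losses = line_scan(U_opt, V_opt, U_x, V_x, X, mask, num=num)
    h = ts[1] - ts[0]
    # Second differences approximate l'' up to the step-size factor h^2.
    return np.diff(losses, 2).min() / h ** 2

# alpha >= 0 across many (x, x_opt) pairs is the empirical star-convexity check.
```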
}, { "heading": "5 RELATED WORK", "text": "Our work was initially motivated by findings in Kleinberg et al. (2018) and Zhou et al. (2019) regarding star-convexity in neural networks. As the success of deep learning has become apparent, researchers have empirically investigated neural network trained on real-world datasets (Li and Yuan, 2017; Keskar et al., 2016). In the context of sharp vs flat local minima (Keskar et al., 2016), Li et al. (2018) illustrate how width improved flatness in a Resnet network, an observation Table 2 quantifies. In general, real-world datasets seem to be far more well-behaved than the worst case, given that training neural networks are NP-hard (Blum and Rivest, 1989). There is extensive work on nonworst-case analysis of algorithms and machine learning models, and on what problem distributions can guarantee tractability which addresses such gaps (Bilu and Linial, 2012; Afshani et al., 2017). On the positive side Arora et al. (2012) has shown an exact algorithm for NMF that runs in polynomial time for small constant r, and there are positive results for ’separable’ NMF (Donoho and Stodden, 2004). Compressive sensing (Candes et al., 2004), smoothed analysis (Spielman and Teng, 2004) and problems with \"planted\" solutions (Barak et al., 2016) (Holland et al., 1983) similarly makes assumptions on the input. Researchers have also been interested in theoretical convergence properties of shallow and linear networks (Lu and Kawaguchi, 2017), where a common theme is that functions with only saddle points and global minima can be effectively optimized (Ge et al., 2015). In analysis of neural networks, random matrix theory often plays a role, directly or indirectly (Choromanska et al., 2015; Glorot and Bengio, 2010; Pennington and Bahri, 2017; Xiao et al., 2018)." }, { "heading": "6 CONCLUSIONS", "text": "This paper revisits NMF, a non-convex optimization problem in machine learning. We have shown that NMF is typically star-convex, provably for a natural average-case model and empirically on an extensive set of real-world datasets. Our results support the counter-intuitive observation that optimization might sometimes be easier in higher dimensions due to concentration of measure effects." }, { "heading": "A DATASETS", "text": "Here we describe the datasets we used to evaluate our results on and provide references for them.\nBirdsong: time series for bird calls, see (Flenner and Hunter, 2017; canto Foundation). Extragalactic: dataset for extragalactic light sources, see (Zhu, 2016). Goodbooks: user ratings for books, see (Kula, 2017; Bourgin, 2018). Metabolic: Yest metabolic activity time series, see (Li and Ngom, 2013). Movielens: user ratings for movies , see (Harper and Konstan, 2016; Li et al., 2012; Zhang et al., 2006). Netflix: User ratings for movies, see (Zhou et al., 2008; Ampazis, 2008; Li et al., 2012). ORL faces: black and white facial images, see (Li et al., 2001; Hoyer, 2004). Satellite: satellite urban spectral-image, see (Erichson et al., 2018; Gillis, 2014)." }, { "heading": "B DNN PARAMETERS", "text": "Table 3 presents the hyper-parameters used for training the neural networks. Data augmentation consists of randomized cropping with 4-padding and randomized horizontal flipping. The learning rate is decreased by a factor of 10 after epochs p150, 225, 275q. We use the standard Resnet architecture for CIFAR10 with three segments with p16, 32, 64q channels each, and use the standard blocks (i.e. not bottleneck blocks) (He et al., 2016)." 
}, { "heading": "C EXTENDED NMF FIGURES", "text": "" }, { "heading": "D PROOFS", "text": "" }, { "heading": "D.1 OVERVIEW", "text": "We here provide the proof of Theorem 1. In section D.2 we present the notation and derive the main inequality. In Section D.3, we prove that the loss function is convex between any two random points with high probability. Section D.4, we prove that this also holds towards the planted solution. For the case with unobserved data, we prove that the loss function is convex between any two random points with high probability in Section D.5. That this holds towards the planted solution in the case of unobserved data follows from previous sections, and the proof of this fact omitted. Together, these results form Theorem 1 as stated in the main text. Section D.6 presents the concentration\nresults we need. The proofs include considerable amounts of elementary algebraic calculations. For completeness and ease of reference, we present these in Appendix E." }, { "heading": "D.2 NOTATION AND THE MAIN INEQUALITY", "text": "The proof will involve various constants c0, c1, c2 that do not depend on n. For convenience, we will let exact values be context dependent and will reuse the symbols c0, c1, c2 and so on. The proofs will not depend on the exact values of these constants. Constants in our main result can be improved slightly, but at the cost of a more complicated expression. We present the non-optimized version. We will let boldface capital letters denote matrices, boldface lower-case letter vectors and non-boldface letters denote scalars. Let U1, U2, U˚ be n-by-r matrices and V1, V2, V˚ r-by-m matrices sampled as per Assumption 1. Without loss of generality, we assume m “ ncm where 0 ă cm ď 1. We will let µvar be the expectation, and σ2 be the variance of the entry-wise distributions. We will denote the centered variables by Ū1, V̄1 and so on, and the mean matrix of U and V resp as 1nˆr, 1rˆm. Then U “ Ū ` 1nˆr. By slight abuse of notation, we will use the convention xX,Yy “ Tr ` XYT ˘\nfor matrices X,Y of the same shape. The loss function we wish to minimize, subject to non-negativity constraints, is\n`pU,Vq “ 1 2 }UV ´X}2F (7)\nWhen we interpolate the solution UV between any fixed two points pU1,V1q and pU2,V2q, we write\nX̂pλq “ ` λU1 ` p1´ λqU2 ˘` λV1 ` p1´ λqV2 ˘\nThe loss is then\n`pλq “ 1 2 }X̂pλq ´X}2F\nProving Theorem 1 in the case of two random points and completely observed data amounts to showing that the continuous function `pλq is convex in λ on the interval r0, 1s with high probability. We will show that the second-derivate is non-negative. For fixed matrices U1,U2,U˚,V1,V2,V˚, the loss `pλq is a fourth-degree polynomial in λ, so its second derivate w.r.t. λ will be a seconddegree polynomial. For a general second-degree polynomial ppxq “ ax2 ` bx` c we have ppxq “ 1 a “` ax ` b2 ˘2 ` ` ac ´ b 2 4 ˘‰\n. By inspecting equation 8 one can conclude that a ą 0. Showing that the polynomial is non-negative can then be done by showing ac ě b 2\n4 . Let X̂ijpλq be element i, j of the matrix X̂pλq. 
The derivatives of $\ell(\lambda)$ are then

$$\ell(\lambda) = \tfrac{1}{2}\sum_{ij}\big(\hat X_{ij}(\lambda) - X_{ij}\big)^2, \qquad \ell'(\lambda) = \sum_{ij}\big(\hat X_{ij}(\lambda) - X_{ij}\big)\hat X'_{ij}(\lambda)$$
$$\ell''(\lambda) = \sum_{ij}\hat X''_{ij}(\lambda)\big(\hat X_{ij}(\lambda) - X_{ij}\big) + \hat X'_{ij}(\lambda)^2$$
$$\ell'''(\lambda) = 3\sum_{ij}\hat X''_{ij}(\lambda)\hat X'_{ij}(\lambda), \qquad \ell''''(\lambda) = 3\sum_{ij}\hat X''_{ij}(\lambda)^2 \qquad (8)$$

where

$$\hat X(\lambda) = \big(\lambda U_1 + (1-\lambda)U_2\big)\big(\lambda V_1 + (1-\lambda)V_2\big)$$
$$\hat X'(\lambda) = (U_1 - U_2)\big(\lambda V_1 + (1-\lambda)V_2\big) + \big(\lambda U_1 + (1-\lambda)U_2\big)(V_1 - V_2)$$
$$\hat X''(\lambda) = 2(U_1 - U_2)(V_1 - V_2)$$

Let us define

$$W_0 = U_2 V_2 - U^* V^*, \qquad W_1 = (U_1 - U_2)V_2 + U_2(V_1 - V_2), \qquad W_2 = (U_1 - U_2)(V_1 - V_2)$$

By expanding $\ell''(\lambda)$ in a Maclaurin series around $0$, we see that proving non-negativity of $\ell''(\lambda)$ can be done by proving $2\ell''(0)\,\ell''''(0) \ge \ell'''(0)^2$. Proving Theorem 3 then amounts to showing that

$$2\|W_2\|_F^2\big(\|W_1\|_F^2 + 2\langle W_0, W_2\rangle\big) \ge 3\langle W_1, W_2\rangle^2 \qquad (9)$$" }, { "heading": "D.3 PROVING THE INEQUALITY BETWEEN RANDOM POINTS", "text": "Theorem 2. Let $(U_1, V_1)$, $(U_2, V_2)$ and the planted solution $(U^*, V^*)$ be sampled as per Assumption 1. The loss function equation 7 is convex on a straight line connecting $(U_1, V_1)$ and $(U_2, V_2)$ with probability $\ge 1 - c_1\exp(-c_2 n^{1/3})$ for positive constants $c_1, c_2$.

Proof. As per Section D.2, we only need to prove that equation 9 holds with probability $\ge 1 - c_1\exp(-c_2 n^{1/3})$. Let us exchange all terms in equation 9 for their means, which are given in Facts 7, 5, 6 and 8:

$$2(4rmn\sigma^4)\big(6rmn\sigma^4 + 4rmn\mu_{var}^2\sigma^2 + 2\,rmn\sigma^4\big) \ge 3\big(-4rmn\sigma^4\big)^2 \qquad (10)$$

Clearly, equation 9 would hold if one naively exchanged the terms for their means: just counting terms of order $(rmn\sigma^4)^2$, we have 64 on the LHS and 48 on the RHS. We need to show that deviations sufficiently large to violate the inequality are unlikely. Our strategy is to show that the factors are exponentially unlikely to deviate substantially individually, and then to use union bounds. The concentration results and their derivations are detailed in Section D.6.

Consider the random variable $\|W_2\|_F^2$. As per equation 14, $\|W_2\|_F^2$ can be rewritten into polynomials of centered random matrices, where we have to pad the matrices with rows/columns of zeros. We can then apply the concentration result of Fact 1 to conclude that the probability that $\|W_2\|_F^2$ deviates from its mean by a factor $(1+\epsilon)$ is no more than $c_1\exp(-c_2\epsilon^2 n)$.

Now consider the random variable $\|W_1\|_F^2$. As in the proof of Fact 7, $\|W_1\|_F^2$ can be rewritten as the sum of expression 16 plus terms of centered variables. The latter are polynomials in centered random matrices; after padding some columns/rows with zeros we can apply Fact 1 to conclude that the probability of a deviation from the mean by a factor $(1+\epsilon)$ is no more than $c_1\exp(-c_2\epsilon^2 n)$. Consider expression 16. Clearly the expressions $1_{r\times m}^T(\bar U_1 - \bar U_2)^T(\bar U_1 - \bar U_2)1_{r\times m}$ and $(\bar V_1 - \bar V_2)^T 1_{n\times r}^T 1_{n\times r}(\bar V_1 - \bar V_2)$ are psd, and thus their traces are non-negative. We can therefore lower bound the LHS of equation 10 by omitting the terms in expression 16, and will thus only consider terms scaling as $rn^2\sigma^4$.

Now consider the random variable $\langle W_0, W_2\rangle$. As per equation 19, the variable $\langle W_0, W_2\rangle$ can also be separated into two parts. After padding with zero rows/columns to obtain square matrices, the first part is a polynomial in centered matrices for which Fact 1 applies, which again bounds the probability of a deviation by a factor $(1+\epsilon)$ by $c_1\exp(-c_2\epsilon^2 n)$. The second part is a sum of terms of the type $1X_1X_2X_3$, modulo permutations, under which the trace is invariant. We can thus apply Fact 2, which bounds the probability of a relative deviation of size $\epsilon$ for $\langle W_0, W_2\rangle$ by $c_1\exp(-c_2\epsilon^2 n^{1/3})$.

At last, consider the random variable $\langle W_1, W_2\rangle$.
By equation 18, $\langle W_1, W_2\rangle$ can, just like $\langle W_0, W_2\rangle$, be written as the sum of a polynomial in centered random matrices and sums of variables of the type $1X_1X_2X_3$ modulo permutations. We can thus apply the same argument as for $\langle W_0, W_2\rangle$ to bound the probability of a deviation of relative size $\epsilon$ by $c_1\exp(-c_2\epsilon^2 n^{1/3})$.

We can now apply a union bound to the events that the variables $\|W_1\|_F^2$, $\|W_2\|_F^2$, $\langle W_0, W_2\rangle$ and $\langle W_1, W_2\rangle$, modulo the term in expression 16, deviate from their respective means by a factor $\epsilon$. The probability that at least one of them does is no more than $c_1\exp(-c_2\epsilon^2 n^{1/3})$. Now set $\epsilon = 0.01$. If no variable deviates by a factor of more than $0.01$, then equation 10 still holds when we only count the terms scaling as $rmn\sigma^4$, since $0.99^2 \cdot 64 \ge 1.01^2 \cdot 48$. Thus the inequality is violated with probability at most $c_1\exp(-c_2 n^{1/3})$ for positive constants $c_1, c_2$." }, { "heading": "D.4 PROVING THE INEQUALITY TOWARDS THE PLANTED SOLUTION", "text": "Theorem 3. Let $(U_1, V_1)$ and the planted solution $(U^*, V^*)$ be sampled as per Assumption 1. The loss function of equation 7 is convex on a straight line connecting $(U_1, V_1)$ and $(U^*, V^*)$ with probability $\ge 1 - c_1\exp(-c_2 n^{1/3})$ for constants $c_1, c_2 > 0$.

Proof. We can repeat the argument used in the proof of Theorem 2. The terms $\|W_2\|^2$, $\|W_1\|^2$ and $\langle W_1, W_2\rangle$ have the same means and concentration, since $U_1$ and $U^*$ are identically distributed. The only difference is the term $\langle W_0, W_2\rangle$, which now has mean $2rmn\sigma^4$ per Fact 10. This only makes the LHS of equation 10 larger, so the inequality still holds, and the concentration-of-measure properties ensure that equation 9 holds with high probability, as in the proof of Theorem 2." }, { "heading": "D.5 PROVING THE INEQUALITY BETWEEN RANDOM POINTS FOR UNOBSERVED DATA", "text": "As explained in the main text, we assume that entries are observed independently with probability $p$. We formalize this in the following assumption.

Assumption 2. For any $n$, let $\hat 1 \in \{0, 1\}^{n\times m}$ have entries drawn iid from a Bernoulli distribution with probability $p$, which is constant w.r.t. $n$.

Given any set of observations $\hat 1$, we define our loss function as

$$\ell(U, V) = \sum_{i,j} \hat 1_{(i,j)}\big([UV]_{ij} - X_{ij}\big)^2 \qquad (11)$$

Theorem 4. Let $(U_1, V_1)$, $(U_2, V_2)$ and the planted solution $(U^*, V^*)$ be sampled as per Assumption 1, and let the set of observations $\hat 1$ be sampled as per Assumption 2. The loss function equation 11 is convex on a straight line connecting $(U_1, V_1)$ and $(U_2, V_2)$ with probability $\ge 1 - c_1\exp(-c_2 r^{1/3})$ for constants $c_1, c_2 > 0$.

Proof. If no entries are observed, the function is trivially convex. If some entries are observed, then $a > 0$, since $\hat X''_{ij}(\lambda)^2 > 0$ for any $(i, j)$ with probability 1. The second derivative of equation 11 can be written as

$$\ell''(\lambda) = \sum_{ij} \hat 1_{(i,j)}\Big(\hat X'^2_{ij} + \hat X''_{ij}\big(\hat X_{ij} - X_{ij}\big)\Big)$$

Using Lemma 1 and a union bound, we may assume that no entry of $\hat X, \hat X', \hat X'', X$ is larger than $O(r^{2/3})$; this fails with probability at most $c_1 n^2\exp(-c_2 r^{1/3})$. Note that this estimate gives the $\exp(-c_2 r^{1/3})$ scaling of the proof. Assuming the entries are bounded like this, no entry of $\hat X'^2_{ij} + \hat X''_{ij}(\hat X_{ij} - X_{ij})$ has magnitude more than $O(r^{4/3})$ for any $\lambda$. Standard Hoeffding bounds (Hoeffding, 1994) state that for independent variables $\{x_i\}$ bounded in $[a_i, b_i]$ we have

$$\mathbb{P}\bigg(\Big|\frac{1}{n}\sum_{i}^{n} x_i\Big| \ge t\bigg) \le \exp\bigg(-\frac{2n^2 t^2}{\sum_{i}^{n}(b_i - a_i)^2}\bigg)$$

Let us define

$$y(\lambda) = \sum_{ij} \hat 1_{(i,j)}\Big(\hat X'^2_{ij}(\lambda) + \hat X''_{ij}\big(\hat X_{ij}(\lambda) - X_{ij}\big)\Big)$$

Let us now consider fixed matrices $U_1, U_2, U^*, V_1, V_2, V^*$ satisfying Lemma 1, but let $\hat 1$ remain a random variable.
We can then consider the product of any two variables out of $\hat X_{ij}, \hat X'_{ij}, \hat X''_{ij}, X_{ij}$ as a fixed constant in the interval $[-c_1 r^{4/3}, c_1 r^{4/3}]$ for some constant $c_1$. We note that $y(\lambda)$ is the sum of $c_m n^2$ independent variables, each of size at most $O(r^{4/3})$, so Hoeffding's inequality applies with $(b_i - a_i)^2 \le O(r^{8/3})$ for all $i$. Taking $t = c r^{2/3}$ gives us

$$\mathbb{P}\Big(\big|y(\lambda) - \mathbb{E}[y(\lambda)]\big| \ge c r^{2/3} n^2\Big) \le \exp\bigg(-\frac{2c^2 c_m n^4 r^{4/3}}{n^2 r^{8/3}}\bigg) \le \exp\big(-c n^{2/3}\big)$$

This holds for any fixed $\lambda$, and by a union bound we can assume that it holds for, say, 3 evenly spaced values of $\lambda$. Using union bounds, we assume the following: 1) $y(\lambda) > c_1 rmn$ for all $\lambda$, which follows from the derivation of Theorem 2 with probability $\ge 1 - c_1\exp(-c_2 n^{1/3})$; 2) $|\hat X_{ij}| \le r^{2/3}$ for all $i, j$, via Lemma 1; 3) $|y(\lambda_i) - \mathbb{E}[y(\lambda_i)]| \le c r^{2/3} n^2$ for 3 evenly spaced $\lambda_i$, by the above equation. Now, since $\ell''(\lambda)$ is a second-degree polynomial, if it is close to its expectation at 3 places it must be close to its expectation everywhere. Formally, define the second-degree polynomial $\hat y(\lambda) = y(\lambda) - \mathbb{E}[y(\lambda)]$. Our union bounds state that $|\hat y(\lambda_i)| < c r^{2/3} n^2$ for three evenly spaced $\lambda_i$. Let the three coefficients of this second-degree polynomial be described by the vector $p$, its values at the $\lambda_i$ by the vector $v$, and let the matrix $A$ describe the values of the monomials $1, \lambda, \lambda^2$ at the $\lambda_i$; we then have $Ap = v$. Now $A$ has elements bounded by 1 but is full rank, which implies that if $v$ is bounded by $c r^{2/3} n^2$, so must $p$ be (up to a constant). The fact that the coefficients of $\hat y$ have absolute value bounded by $O(r^{2/3} n^2)$, together with the fact that the mean scales as $r n^2$, gives us

$$\min_{\lambda} y(\lambda) = \min_{\lambda}\big(\hat y(\lambda) + \mathbb{E}[y(\lambda)]\big) \ge \min_{\lambda}\hat y(\lambda) + \min_{\lambda'}\mathbb{E}[y(\lambda')] \ge \min_{\lambda}\hat y(\lambda) + c r n^2 \ge c_1 r^{2/3} n^2 + c r n^2$$

Here $c$ is positive as per our assumptions, whereas $c_1$ might be negative. For sufficiently large $r$, the above quantity is therefore non-negative, and we can find new constants, based upon $c$ and $c_1$, to complete the proof." }, { "heading": "D.6 CONCENTRATION", "text": "This section contains the concentration results needed for the proof of Theorem 1. We use results from random matrix theory, which has stronger results for matrices with independent entries that are concentrated (Ahlswede and Winter, 2002; Tropp, 2012; Guionnet et al., 2000). Specifically, we use the concentration results of Meckes and Szarek (2012), which rely on the notion of the convex concentration property (CCP), a regularity condition.

Definition 2. A random matrix $X$ in a normed vector space satisfies the CCP iff there exist positive constants $c_1, c_2$ such that $\mathbb{P}\big(|f(X) - Mf(X)| \ge t\big) \le c_1 e^{-c_2 t^2}$ for all $t$ and all convex 1-Lipschitz functions $f$. Here $M$ denotes the median.

One can easily verify that independent Gaussian variables satisfy this (Ledoux, 2001), and it then follows that 1-Lipschitz functions of Gaussians also satisfy the CCP. One important class here is the half-normal distribution, which is just the absolute value of a Gaussian variable; absolute value is a 1-Lipschitz function. Any vector of independent bounded variables also satisfies the CCP (Meckes and Szarek, 2012).

Fact 1. Let $\hat P$ be a polynomial of degree 4 in centered matrices $X_i, X_j, X_k, X_l$ otherwise sampled as per Assumption 1, where we allow $i = j$, $j = l$ and any other arbitrary index relationship. Let $z = \mathrm{Tr}\,\hat P(X_i, X_j, X_k, X_l)$ and let $\mu$ be the mean of $z$. Then $\mathbb{P}\big(|z - \mu| > t\,r n^2\big) \le c_1\exp\big(-c_2\min(t^2, t^{1/2})\,n\big)$ for positive constants $c_1, c_2$.

Proof.
Any CCP vector in which some components are set to zero still satisfies the CCP; thus we can pad the matrices $X_i, X_j, X_k, X_l$ so that they are square. We then invoke Theorem 1 of Meckes and Szarek (2012), which gives

$$\mathbb{P}\big(|z - \mu| > \tau n^2\big) \le c_1\exp\big(-c_2\min(\tau^2, n\tau^{1/2})\big)$$

Setting $\tau = tr$ gives us

$$\mathbb{P}\big(|z - \mu| > t\,r n^2\big) \le c_1\exp\big(-c_2\min(t^2 r^2, t^{1/2} r^{1/2} n)\big)$$

Using our assumption $r = O(n^\gamma)$ for some $\gamma \in [1/2, 1]$ gives

$$\mathbb{P}\big(|z - \mu| > t\,r n^2\big) \le c_1\exp\big(-c_2\min(t^2, t^{1/2})\,n\big)$$

Fact 2. Let $z = \mathrm{Tr}(1 X_i X_j X_k)$ for centered matrices $X_i, X_j, X_k$ otherwise sampled as per Assumption 1, and let $\mu$ be the mean of $z$; here $1$ is a matrix of all ones of the appropriate shape. Then $\mathbb{P}\big(|z - \mu| \ge t\,r n^2\big) \le c_1\exp\big(-c_2 n^{1/3}\min(t^{1/2}, t^2)\big)$ for positive constants $c_1, c_2$.

Proof. As before, we can pad the matrices to be square while retaining the CCP. We note that the largest singular value of the matrix $1$ is at most $n$, so the Schatten norm $\|1\|_k$ is at most $n$ for all $k$. Let us consider the matrix $\hat 1 = \frac{1}{n^{1/3}}\,1$, whose Schatten norm $\|\hat 1\|_k$ is then at most $n^{2/3}$. For this matrix, Theorem 1 in Meckes and Szarek (2012) applies, so that we have

$$\mathbb{P}\Big[\big|\mathrm{Tr}\big(\hat 1 X_i X_j X_k\big) - \mu\big| \ge t n^2\Big] \le c_1\exp\big(-c_2\min(t^2, n t^{1/2})\big) \qquad (12)$$

Let us take $t = t_0\, r n^{-1/3}$ so that $t n^{7/3} = t_0\, r n^2$. We then have

$$\big|\mathrm{Tr}(\hat 1 X_i X_j X_k) - \mu\big| \ge t n^2 \iff \big|\mathrm{Tr}(1 X_i X_j X_k) - \mu\big| \ge t n^{7/3} \iff \big|\mathrm{Tr}(1 X_i X_j X_k) - \mu\big| \ge t_0\, r n^2$$

By substituting $t = t_0\, r n^{-1/3}$ into equation 12 we then have

$$\mathbb{P}\Big(\big|\mathrm{Tr}(1 X_i X_j X_k) - \mu\big| \ge t_0\, r n^2\Big) \le c_1\exp\big(-c_2\min(t_0^2 r^2 n^{-2/3},\; t_0^{1/2} r^{1/2} n^{5/6})\big) \qquad (13)$$

Recall our assumption $r = c_3 n^\gamma$ for $\gamma \in [1/2, 1]$. We then have

$$\min\big(t_0^2 r^2 n^{-2/3},\; t_0^{1/2} r^{1/2} n^{5/6}\big) \ge \min\big(t_0^{1/2}, t_0^2\big)\, n^{1/3}$$

Plugging this into equation 13 completes the proof.

Lemma 1. With probability at least $1 - c_1 n^2\exp(-c_2 r^{1/3})$, no entry of $\hat X_{ij}, \hat X'_{ij}, \hat X''_{ij}, X_{ij}$ has an absolute value larger than $c r^{2/3}$, for some positive constant $c$.

Proof. For fixed $i, j$, consider the terms $\hat X_{ij}, \hat X'_{ij}, \hat X''_{ij}, X_{ij}$. Fact 9 says that each one can be expressed as a constant number of zero-mean variables $a$ of the form

$$a = \sum_{i=1}^{r} v_i^{(1)} v_i^{(2)}$$

Since the variables themselves are sub-Gaussian, by Lemma 2.7.7 in Vershynin (2018) the product $v_i^{(1)} v_i^{(2)}$ is sub-exponential. Thus the variable $a$ is a sum of iid sub-exponential variables, and Theorem 2.8.1 in Vershynin (2018) states that

$$\mathbb{P}\big(|a| \ge t\big) \le 2\exp\bigg(-c\min\Big[\frac{t^2}{r K^2}, \frac{t}{K}\Big]\bigg)$$

Here $K$ is the Orlicz norm $\|\cdot\|_{\Psi_1}$, which is a constant for our fixed distributions. In the above expression, let us set $t = c r^{2/3}$. This gives

$$\mathbb{P}\Big(\big|a - \mathbb{E}[a]\big| \ge c r^{2/3}\Big) \le 2\exp\big(-c r^{1/3}\big)$$

We note that there is a polynomial number of entries $a$, so we can take a polynomial number (w.r.t. $n$) of union bounds, which might change the constants, to show that no entry is further than $c_1 r^{2/3}$ from its expectation for some positive $c_1$." }, { "heading": "E ALGEBRAIC CALCULATIONS", "text": "" }, { "heading": "E.1 BASIC FACTS", "text": "Fact 3. $\mathbb{E}\big[\mathrm{Tr}\,\bar U^T\bar U\bar V\bar V^T\big] = rmn\sigma^4$

Proof. We have

$$\mathbb{E}\big[\mathrm{Tr}\,\bar U^T\bar U\bar V\bar V^T\big] = \mathbb{E}\Big[\sum_{ijkl}\bar U_{ji}\bar U_{jk}\bar V_{kl}\bar V_{il}\Big]$$

Taking the mean and using linearity of expectation, a term has nonzero mean only if $i = k$. We get

$$= \mathbb{E}\Big[\sum_{ijl}\bar U_{ji}\bar U_{ji}\bar V_{il}\bar V_{il}\Big]$$

Using linearity of expectation and independence, this becomes

$$\sum_{ijl}\mathbb{E}\big[\bar U_{ji}\bar U_{ji}\big]\,\mathbb{E}\big[\bar V_{il}\bar V_{il}\big]$$

For a single matrix entry, $\mathbb{E}[\bar U_{ji}\bar U_{ji}] = \sigma^2$. The index $i$ ranges over the columns of $\bar U$, of which there are $r$; $j$ over the rows of $\bar U$, of which there are $n$; and $l$ over the columns of $\bar V$, of which there are $m$. So we clearly get $rmn\sigma^4$.

Fact 4.
$\mathbb{E}\big[\mathrm{Tr}\,\bar U^T\bar U\, 1_{r\times m} 1_{r\times m}^T\big] = rmn\mu_{var}^2\sigma^2$

Proof. We can first contract $1_{r\times m} 1_{r\times m}^T = \mathbf{1}\, m\,\mu_{var}^2$, where $\mathbf{1}$ is an $r$-by-$r$ matrix of all ones. We then want to find

$$\mathbb{E}\Big[\sum_{ijk}\bar U_{ji}\bar U_{jk}\mathbf{1}_{ki}\Big]$$

Since the variables are centered, the expectation is zero unless $i = k$. The sum thus becomes

$$\mathbb{E}\Big[\sum_{ij}\bar U_{ji}\bar U_{ji}\mathbf{1}_{ii}\Big] = nr\sigma^2$$

This gives the result $rmn\mu_{var}^2\sigma^2$." }, { "heading": "E.2 EXPECTATION", "text": "Fact 5. $\mathbb{E}[\|W_2\|^2] = 4mnr\sigma^4$

Proof. Let us rewrite $\mathbb{E}[\|W_2\|^2]$ as

$$\mathbb{E}\,\mathrm{Tr}\Big[(V_1 - V_2)^T(U_1 - U_2)^T(U_1 - U_2)(V_1 - V_2)\Big]$$

Trace and expectation are linear operators, so we can reorder them. Adding and subtracting the means of all matrices leaves the expression unchanged and gives

$$(\bar V_1 - \bar V_2)^T(\bar U_1 - \bar U_2)^T(\bar U_1 - \bar U_2)(\bar V_1 - \bar V_2) \qquad (14)$$

Expanding the parentheses gives 16 matrix products; however, they have zero mean unless both the $U$-matrices and the $V$-matrices are the same. For calculating the mean, we thus only need to consider four matrices of the type $\bar V^T\bar U^T\bar U\bar V$. Permuting the indices cyclically and appealing to Fact 3 gives $\mathbb{E}[\|W_2\|^2] = 4mnr\sigma^4$.

Fact 6. $\mathbb{E}[\langle W_0, W_2\rangle] = rmn\sigma^4$

Proof. We can write $\mathbb{E}[\langle W_0, W_2\rangle]$ as

$$\mathbb{E}\,\mathrm{Tr}\Big[\big(V_2^T U_2^T - (V^*)^T(U^*)^T\big)\big((U_1 - U_2)(V_1 - V_2)\big)\Big]$$

Again, we add and subtract the matrix means $1_{n\times r}, 1_{r\times m}$ to get

$$\mathbb{E}\,\mathrm{Tr}\Big[\Big(\big[\bar V_2^T\bar U_2^T + \bar V_2^T 1_{n\times r}^T + 1_{r\times m}^T\bar U_2^T + 1_{r\times m}^T 1_{n\times r}^T\big] - \big[\bar V^{*T}\bar U^{*T} + \bar V^{*T} 1_{n\times r}^T + 1_{r\times m}^T\bar U^{*T} + 1_{r\times m}^T 1_{n\times r}^T\big]\Big)\big((\bar U_1 - \bar U_2)(\bar V_1 - \bar V_2)\big)\Big] \qquad (15)$$

As before, we can remove all terms that are linear in any centered variable; this removes every term involving $\bar U_1, \bar V_1$ or the starred matrices and leaves

$$\mathbb{E}\,\mathrm{Tr}\Big[\big(\bar V_2^T\bar U_2^T + \bar V_2^T 1_{n\times r}^T + 1_{r\times m}^T\bar U_2^T\big)\big(\bar U_2\bar V_2\big)\Big]$$

Applying Fact 3 to the only term that remains with nonzero mean gives

$$\mathbb{E}[\langle W_0, W_2\rangle] = \mathbb{E}\,\mathrm{Tr}\big[\bar V_2^T\bar U_2^T\bar U_2\bar V_2\big] = rmn\sigma^4$$

Fact 7. $\mathbb{E}\|W_1\|_F^2 = 6rmn\sigma^4 + 4rmn\mu_{var}^2\sigma^2$

Proof. We rewrite $\mathbb{E}\|W_1\|_F^2$ as

$$\mathbb{E}\,\mathrm{Tr}\Big[\big(V_2^T(U_1 - U_2)^T + (V_1 - V_2)^T U_2^T\big)\big((U_1 - U_2)V_2 + U_2(V_1 - V_2)\big)\Big]$$

We again center the variables, which gives

$$\mathbb{E}\,\mathrm{Tr}\Big[\big(\bar V_2^T(\bar U_1 - \bar U_2)^T + 1_{r\times m}^T(\bar U_1 - \bar U_2)^T + (\bar V_1 - \bar V_2)^T\bar U_2^T + (\bar V_1 - \bar V_2)^T 1_{n\times r}^T\big) \times \big((\bar U_1 - \bar U_2)\bar V_2 + (\bar U_1 - \bar U_2)1_{r\times m} + \bar U_2(\bar V_1 - \bar V_2) + 1_{n\times r}(\bar V_1 - \bar V_2)\big)\Big]$$

Let us first consider the terms involving the constant matrices. Removing any term linear in a centered variable, we are left with

$$1_{r\times m}^T(\bar U_1 - \bar U_2)^T(\bar U_1 - \bar U_2)1_{r\times m} + (\bar V_1 - \bar V_2)^T 1_{n\times r}^T 1_{n\times r}(\bar V_1 - \bar V_2) \qquad (16)$$

As before, any expressions linear in centered variables vanish when we take expectations. Using this and Fact 4, the expectation of the above expression equals $4rmn\mu_{var}^2\sigma^2$.

We now return to the terms without constants. These are

$$\bar V_2^T(\bar U_1 - \bar U_2)^T(\bar U_1 - \bar U_2)\bar V_2 + (\bar V_1 - \bar V_2)^T\bar U_2^T\bar U_2(\bar V_1 - \bar V_2) + \bar V_2^T(\bar U_1 - \bar U_2)^T\bar U_2(\bar V_1 - \bar V_2) + (\bar V_1 - \bar V_2)^T\bar U_2^T(\bar U_1 - \bar U_2)\bar V_2 \qquad (17)$$

As before, we remove any expressions linear in centered variables. We are then left with 6 terms of the type $\bar V_2^T\bar U_2^T\bar U_2\bar V_2$. Applying Fact 3 gives

$$\mathbb{E}\big[\|W_1\|_F^2\big] = 6rmn\sigma^4 + 4rmn\mu_{var}^2\sigma^2$$

Fact 8. $\mathbb{E}\langle W_2, W_1\rangle = -4rmn\sigma^4$

Proof. Let us rewrite $\mathbb{E}\langle W_2, W_1\rangle$ as

$$\mathbb{E}\,\mathrm{Tr}\Big[\big((V_1 - V_2)^T(U_1 - U_2)^T\big)\big((U_1 - U_2)V_2 + U_2(V_1 - V_2)\big)\Big]$$

As usual, we center the variables:

$$= \mathbb{E}\,\mathrm{Tr}\Big[\big((\bar V_1 - \bar V_2)^T(\bar U_1 - \bar U_2)^T\big)\big((\bar U_1 - \bar U_2)\bar V_2 + (\bar U_1 - \bar U_2)1_{r\times m} + \bar U_2(\bar V_1 - \bar V_2) + 1_{n\times r}(\bar V_1 - \bar V_2)\big)\Big] \qquad (18)$$

Any term involving the constant matrices is linear in a centered variable and thus vanishes under expectation. Collecting like terms and using Fact 3, we get

$$= -4\,\mathbb{E}\big[\mathrm{Tr}\,\bar V_1^T\bar U_1^T\bar U_1\bar V_1\big] = -4rmn\sigma^4$$" }, { "heading": "E.3 OTHER ALGEBRAIC FACTS", "text": "Fact 9. For any fixed $i, j$, the entries of $\hat X - X$, $\hat X'$ and $\hat X''$ are zero mean.

Proof.
We have

$$\hat X(\lambda) - X = \big(\lambda U_1 + (1-\lambda)U_2\big)\big(\lambda V_1 + (1-\lambda)V_2\big) - U^* V^*$$
$$\hat X'(\lambda) = (U_1 - U_2)\big(\lambda V_1 + (1-\lambda)V_2\big) + \big(\lambda U_1 + (1-\lambda)U_2\big)(V_1 - V_2)$$
$$\hat X''(\lambda) = 2(U_1 - U_2)(V_1 - V_2)$$

The fact that $U_1$ and $U_2$ are iid implies that $(U_1 - U_2)$ has zero mean, and similarly for $V_1$ and $V_2$. This implies that all terms of $\hat X'$ and $\hat X''$ are zero mean. Using the fact that $U_1, U_2$ and $U^*$ are iid (and likewise for the $V$ matrices) implies that

$$\mathbb{E}\Big[\big(\lambda U_1 + (1-\lambda)U_2\big)\big(\lambda V_1 + (1-\lambda)V_2\big)\Big] = \mathbb{E}[U^* V^*]$$

This, in turn, implies that $\mathbb{E}[\hat X - X] = 0$.

Fact 10. For $U_1 = U^*$ and $V_1 = V^*$, we have $\mathbb{E}[\langle W_0, W_2\rangle] = 2rmn\sigma^4$

Proof. We can write $\mathbb{E}[\langle W_0, W_2\rangle]$ as

$$\mathbb{E}\,\mathrm{Tr}\Big[\big(V_2^T U_2^T - (V^*)^T(U^*)^T\big)\big((U^* - U_2)(V^* - V_2)\big)\Big]$$

Again, we add and subtract the matrix means $1_{n\times r}, 1_{r\times m}$ to get

$$\mathbb{E}\,\mathrm{Tr}\Big[\Big(\big[\bar V_2^T\bar U_2^T + \bar V_2^T 1_{n\times r}^T + 1_{r\times m}^T\bar U_2^T + 1_{r\times m}^T 1_{n\times r}^T\big] - \big[\bar V^{*T}\bar U^{*T} + \bar V^{*T} 1_{n\times r}^T + 1_{r\times m}^T\bar U^{*T} + 1_{r\times m}^T 1_{n\times r}^T\big]\Big)\big((\bar U^* - \bar U_2)(\bar V^* - \bar V_2)\big)\Big] \qquad (19)$$

As before, we can remove terms that are linear in any centered variable. The only terms that remain give

$$\mathbb{E}[\langle W_0, W_2\rangle] = \mathbb{E}\,\mathrm{Tr}\big[\bar V_2^T\bar U_2^T\bar U_2\bar V_2\big] + \mathbb{E}\,\mathrm{Tr}\big[\bar V^{*T}\bar U^{*T}\bar U^*\bar V^*\big] = 2rmn\sigma^4$$" }
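The closed-form expectations in Facts 5–8 are easy to sanity-check by Monte Carlo; the sketch below (our own illustration, not from the paper) uses standard half-normal entries, for which $\mu_{var}^2 = 2/\pi$ and $\sigma^2 = 1 - 2/\pi$, with arbitrary sizes and trial counts.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m, trials = 120, 10, 120, 200
MU2 = 2 / np.pi       # squared mean of a standard half-normal entry
SIG2 = 1 - 2 / np.pi  # its variance

emp = np.zeros(4)
for _ in range(trials):
    U1, U2, Us = (np.abs(rng.standard_normal((n, r))) for _ in range(3))
    V1, V2, Vs = (np.abs(rng.standard_normal((r, m))) for _ in range(3))
    W0 = U2 @ V2 - Us @ Vs
    W1 = (U1 - U2) @ V2 + U2 @ (V1 - V2)
    W2 = (U1 - U2) @ (V1 - V2)
    emp += [np.sum(W2 * W2), np.sum(W0 * W2),
            np.sum(W1 * W1), np.sum(W1 * W2)]
emp /= trials

rmn = r * m * n
theory = np.array([
    4 * rmn * SIG2 ** 2,                         # Fact 5
    rmn * SIG2 ** 2,                             # Fact 6
    6 * rmn * SIG2 ** 2 + 4 * rmn * MU2 * SIG2,  # Fact 7
    -4 * rmn * SIG2 ** 2,                        # Fact 8
])
print("empirical / theory:", emp / theory)
```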
2019
null
SP:37c8908c43beda4efc9db25216225f0106fe009c
[ "The authors describe a method for adversarially modifying a given (test) example that 1) still retains the correct label on the example, but 2) causes a model to make an incorrect prediction on it. The novelty of their proposed method is that their adversarial modifications are along a provided semantic axis (e.g., changing the color of someone's skin in a face recognition task) instead of the standard $L_p$ perturbations that the existing literature has focused on (e.g., making a very small change to each individual pixel). The adversarial examples that the authors construct, experimentally, are impressive and striking. I'd especially like to acknowledge the work that the authors put in to construct an anonymous link where they showcase results from their experiments. Thank you!", "This paper proposes to generate \"unrestricted adversarial examples\" via attribute-conditional image editing. Their method, SemanticAdv, leverages disentangled semantic factors and interpolates feature-map with higher freedom than attribute-space. Their adversarial optimization objectives combine both attack effectiveness and interpolation smoothness. They conduct extensive experiments for several tasks compared with CW-attack, showing broad applicability of the proposed method." ]
Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples, which are manipulated instances crafted to mislead DNNs into making incorrect predictions. Currently, most such adversarial examples try to guarantee a “subtle perturbation” by limiting the Lp norm of the perturbation. In this paper, we aim to explore the impact of semantic manipulation on DNN predictions by manipulating the semantic attributes of images and generating “unrestricted adversarial examples”. In particular, we propose an algorithm, SemanticAdv, which leverages disentangled semantic factors to generate adversarial perturbations by altering controlled semantic attributes to fool the learner towards various “adversarial” targets. We conduct extensive experiments to show that such semantics-based adversarial examples can not only fool different learning tasks such as face verification and landmark detection, but also achieve a high targeted attack success rate against real-world black-box services such as the Azure face verification service, based on transferability. To further demonstrate the applicability of SemanticAdv beyond the face recognition domain, we also generate semantic perturbations on street-view images. Such adversarial examples with controlled semantic manipulation can shed further light on the vulnerabilities of DNNs as well as on potential defensive approaches.
[ { "affiliations": [], "name": "PLES VIA" } ]
[ { "authors": [ "Yoshua Bengio", "Grégoire Mesnil", "Yann Dauphin", "Salah Rifai" ], "title": "Better mixing via deep representations", "venue": "In ICML,", "year": 2013 }, { "authors": [ "Anand Bhattad", "Min Jin Chong", "Kaizhao Liang", "Bo Li", "David A Forsyth" ], "title": "Big but imperceptible adversarial perturbations via semantic manipulation", "venue": null, "year": 1904 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Tom B Brown", "Nicholas Carlini", "Chiyuan Zhang", "Catherine Olsson", "Paul Christiano", "Ian Goodfellow" ], "title": "Unrestricted adversarial examples", "venue": "arXiv preprint arXiv:1809.08352,", "year": 2018 }, { "authors": [ "Adrian Bulat", "Georgios Tzimiropoulos" ], "title": "Binarized convolutional landmark localizers for human pose estimation and face alignment with limited resources", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Adrian Bulat", "Georgios Tzimiropoulos" ], "title": "How far are we from solving the 2d & 3d face alignment problem?(and a dataset of 230,000 3d facial landmarks)", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (S&P)", "year": 2017 }, { "authors": [ "Yunjey Choi", "Minje Choi", "Munyoung Kim", "Jung-Woo Ha", "Sunghun Kim", "Jaegul Choo" ], "title": "Stargan: Unified generative adversarial networks for multi-domain image-to-image translation", "venue": null, "year": 2018 }, { "authors": [ "Moustapha Cisse", "Yossi Adi", "Natalia Neverova", "Joseph Keshet" ], "title": "Houdini: Fooling deep structured prediction models", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Ronan Collobert", "Jason Weston" ], "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Marius Cordts", "Mohamed Omran", "Sebastian Ramos", "Timo Rehfeld", "Markus Enzweiler", "Rodrigo Benenson", "Uwe Franke", "Stefan Roth", "Bernt Schiele" ], "title": "The cityscapes dataset for semantic urban scene understanding", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Ltsc Deng", "Jinyu Li", "Jui-Ting Huang", "Kaisheng Yao", "Dong Yu", "Frank Seide", "Michael L Seltzer", "Geoffrey Zweig", "Xiaodong He", "Jason D Williams" ], "title": "Recent advances in deep learning for speech research at microsoft", "venue": "In ICASSP,", "year": 2013 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Gintare Karolina Dziugaite", "Zoubin Ghahramani", "Daniel M Roy" ], "title": "A study of the effect of jpg compression on adversarial images", "venue": "arXiv preprint arXiv:1608.00853,", "year": 2016 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A rotation and a translation suffice: Fooling cnns with simple transformations", "venue": "arXiv 
preprint arXiv:1712.02779,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Yandong Guo", "Lei Zhang", "Yuxiao Hu", "Xiaodong He", "Jianfeng Gao" ], "title": "Ms-celeb-1m: A dataset and benchmark for large-scale face recognition", "venue": null, "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Seunghoon Hong", "Xinchen Yan", "Thomas S Huang", "Honglak Lee" ], "title": "Learning hierarchical semantic image manipulation through structured representations", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Gary B Huang", "Marwan Mattar", "Tamara Berg", "Eric Learned-Miller" ], "title": "Labeled faces in the wild: A database forstudying face recognition in unconstrained environments", "venue": "In Workshop on faces in’Real-Life’Images: detection,", "year": 2008 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": null, "year": 2016 }, { "authors": [ "Justin Johnson", "Agrim Gupta", "Li Fei-Fei" ], "title": "Image generation from scene graphs", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Ameya Joshi", "Amitangshu Mukherjee", "Soumik Sarkar", "Chinmay Hegde" ], "title": "Semantic adversarial attacks: Parametric transformations that fool deep classifiers", "venue": null, "year": 1904 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Ira Kemelmacher-Shlizerman", "Steven M Seitz", "Daniel Miller", "Evan Brossard" ], "title": "The megaface benchmark: 1 million faces for recognition at scale", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Brendan F Klare", "Ben Klein", "Emma Taborsky", "Austin Blanton", "Jordan Cheney", "Kristen Allen", "Patrick Grother", "Alan Mah", "Anil K Jain" ], "title": "Pushing the frontiers of unconstrained face detection and recognition: Iarpa janus benchmark a", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Xin Li", "Fuxin Li" ], "title": "Adversarial examples detection in deep networks with convolutional filter statistics", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Ming-Yu Liu", "Thomas Breuel", "Jan Kautz" ], "title": 
"Unsupervised image-to-image translation networks", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Aravindh Mahendran", "Andrea Vedaldi" ], "title": "Understanding deep image representations by inverting them", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Elman Mansimov", "Emilio Parisotto", "Jimmy Lei Ba", "Ruslan Salakhutdinov" ], "title": "Generating images from captions with attention", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Stylianos Moschoglou", "Athanasios Papaioannou", "Christos Sagonas", "Jiankang Deng", "Irene Kotsia", "Stefanos Zafeiriou" ], "title": "Agedb: the first manually collected, in-the-wild age database", "venue": "In CVPR Workshops,", "year": 2017 }, { "authors": [ "Alejandro Newell", "Kaiyu Yang", "Jia Deng" ], "title": "Stacked hourglass networks for human pose estimation", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon Shlens" ], "title": "Conditional image synthesis with auxiliary classifier gans", "venue": "In ICML. JMLR,", "year": 2017 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "In Security and Privacy (EuroS&P),", "year": 2016 }, { "authors": [ "Omkar M Parkhi", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep face recognition", "venue": "In bmvc,", "year": 2015 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Scott Reed", "Kihyuk Sohn", "Yuting Zhang", "Honglak Lee" ], "title": "Learning to disentangle factors of variation with manifold interaction", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Scott Reed", "Zeynep Akata", "Xinchen Yan", "Lajanugen Logeswaran", "Bernt Schiele", "Honglak Lee" ], "title": "Generative adversarial text to image synthesis", "venue": null, "year": 2016 }, { "authors": [ "Christos Sagonas", "Georgios Tzimiropoulos", "Stefanos Zafeiriou", "Maja Pantic" ], "title": "300 faces in-thewild challenge: The first facial landmark localization challenge", "venue": "In ICCV Workshop,", "year": 2013 }, { "authors": [ "Soumyadip Sengupta", "Jun-Cheng Chen", "Carlos Castillo", "Vishal M Patel", "Rama Chellappa", "David W Jacobs" ], "title": "Frontal to profile face verification in the wild", "venue": "In WACV,", "year": 2016 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. 
nature,", "year": 2016 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Yang Song", "Rui Shu", "Nate Kushman", "Stefano Ermon" ], "title": "Constructing unrestricted adversarial examples with generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yi Sun", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face representation from predicting 10,000 classes", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Guanhong Tao", "Shiqing Ma", "Yingqi Liu", "Xiangyu Zhang" ], "title": "Attacks meet interpretability: Attributesteered detection of adversarial samples", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Aaron Van den Oord", "Nal Kalchbrenner", "Lasse Espeholt", "Oriol Vinyals", "Alex Graves" ], "title": "Conditional image generation with pixelcnn decoders", "venue": null, "year": 2016 }, { "authors": [ "Hao Wang", "Yitong Wang", "Zheng Zhou", "Xing Ji", "Dihong Gong", "Jingchao Zhou", "Zhifeng Li", "Wei Liu" ], "title": "Cosface: Large margin cosine loss for deep face recognition", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Highresolution image synthesis and semantic manipulation with conditional gans", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Chaowei Xiao", "Ruizhi Deng", "Bo Li", "Fisher Yu", "Mingyan Liu", "Dawn Song" ], "title": "Characterizing adversarial examples based on spatial consistency information for semantic segmentation", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Chaowei Xiao", "Bo Li", "Jun-Yan Zhu", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Generating adversarial examples with adversarial networks", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Chaowei Xiao", "Jun-Yan Zhu", "Bo Li", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Spatially transformed adversarial examples", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Chaowei Xiao", "Dawei Yang", "Bo Li", "Jia Deng", "Mingyan Liu" ], "title": "Meshadv: Adversarial meshes for visual recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Cihang Xie", "Zhishuai Zhang", "Yuyin Zhou", "Song Bai", "Jianyu Wang", "Zhou Ren", "Alan L Yuille" ], "title": "Improving transferability of adversarial examples with input diversity", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Weilin Xu", "David Evans", "Yanjun Qi" ], "title": "Feature squeezing: Detecting adversarial examples in deep neural networks", "venue": "arXiv preprint arXiv:1704.01155,", "year": 2017 }, { "authors": [ "Xinchen Yan", "Jimei Yang", "Kihyuk Sohn", "Honglak Lee" ], "title": "Attribute2image: 
Conditional image generation from visual attributes", "venue": null, "year": 2016 }, { "authors": [ "Fisher Yu", "Vladlen Koltun", "Thomas Funkhouser" ], "title": "Dilated residual networks", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Han Zhang", "Tao Xu", "Hongsheng Li", "Shaoting Zhang", "Xiaogang Wang", "Xiaolei Huang", "Dimitris N Metaxas" ], "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks", "venue": null, "year": 2017 }, { "authors": [ "Xingcheng Zhang", "Lei Yang", "Junjie Yan", "Dahua Lin" ], "title": "Accelerated training for massive classification via dynamic class selection", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Jun-Yan Zhu", "Philipp Krähenbühl", "Eli Shechtman", "Alexei A Efros" ], "title": "Generative visual manipulation on the natural image manifold", "venue": "In ECCV. Springer,", "year": 2016 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Xiangyu Zhu", "Zhen Lei", "Xiaoming Liu", "Hailin Shi", "Stan Z Li" ], "title": "Face alignment across large poses: A 3d solution", "venue": "In CVPR,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have demonstrated great successes in advancing the state-of-the-art performance of discriminative tasks (Krizhevsky et al., 2012; Goodfellow et al., 2016; He et al., 2016; Collobert & Weston, 2008; Deng et al., 2013; Silver et al., 2016). However, recent research found that DNNs are vulnerable to adversarial examples which are carefully crafted instances aiming to induce arbitrary prediction errors for learning systems. Such adversarial examples containing small magnitude of perturbation have shed light on understanding and discovering potential vulnerabilities of DNNs (Szegedy et al., 2013; Goodfellow et al., 2014b; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016; Carlini & Wagner, 2017; Xiao et al., 2018b;c;a; 2019). Most existing work focused on constructing adversarial examples by adding Lp bounded pixel-wise perturbations (Goodfellow et al., 2014b) or spatially transforming the image (Xiao et al., 2018c; Engstrom et al., 2017) (e.g., in-plane rotation or out-of-plane rotation). Generating unrestricted perturbations with semantically meaningful patterns is an important yet under-explored field.\nAt the same time, deep generative models have demonstrated impressive performance in learning disentangled semantic factors through data generation in an unsupervised (Radford et al., 2015; Karras et al., 2018; Brock et al., 2019) or weakly-supervised manner based on semantic attributes (Yan et al., 2016; Choi et al., 2018). Empirical findings in (Yan et al., 2016; Zhu et al., 2016a; Radford et al., 2015) demonstrated that a simple linear interpolation on the learned image manifold can produce smooth visual transitions between a pair of input images.\nIn this paper, we introduce a novel attack SemanticAdv which generates unrestricted perturbations with semantically meaningful patterns. Motivated by the findings mentioned above, we leverage an attribute-conditional image editing model (Choi et al., 2018) to synthesize adversarial examples by interpolating between source and target images in the feature-map space. Here, we focus on changing a single attribute dimension to achieve adversarial goals while keeping the generated adversarial image reasonably-looking (e.g., see Figure 1). To validate the effectiveness of the proposed attack method, we consider two tasks, namely, face verification and landmark detection, as face recognition field has been extensively explored and the commercially used face models are relatively robust\n+blonde hair\nAdversarial Image\nSynthesized Image Target Image\nMr. Bob\nMr. BobAttribute-conditional Image Generator\nIdentity VerificationOriginal Image\nMiss Alice\nReconstruction via Generation\nOriginal Attribute\nAugmented Attribute\nAttribute-conditional Image Editing via Generation\nFeature-map Interpolation\nAdversarial Image\nOriginal Image\nTarget Image\nSemanticAdv +pale skin\nFigure 1: Left: Overview of the proposed SemanticAdv. Right: Illustration of our SemanticAdv in the real world face verification platform. Note that the confidence denotes the likelihood that two faces belong to the same person.\nsince they require a low false positive rate. We conduct both qualitative and quantitative evaluations on CelebA dataset (Liu et al., 2015). To demonstrate the applicability of SemanticAdv beyond face domain, we further extend SemanticAdv to generate adversarial street-view images. 
We treat semantic layouts as input attributes and use the image editing model (Hong et al., 2018) pre-trained on the Cityscape dataset (Cordts et al., 2016). Please find more visualization results on the anonymous website: https://sites.google.com/view/generate-semantic-adv-example.

The contributions of the proposed SemanticAdv are threefold. First, we propose a novel semantic-based attack method to generate unrestricted adversarial examples by feature-space interpolation. Second, the proposed method is able to generate semantically controllable perturbations due to the attribute-conditioned modeling. This allows us to analyze the robustness of a recognition system against different types of semantic attacks. Third, as a side benefit, the proposed attack exhibits high transferability and leads to a 65% query-free black-box attack success rate on a real-world face verification platform, outperforming pixel-wise perturbations when attacking existing defense methods." }, { "heading": "2 RELATED WORK", "text": "Semantic image editing. Semantic image synthesis and manipulation is a popular research topic in machine learning, graphics, and vision. Thanks to recent advances in deep generative models (Kingma & Welling, 2014; Goodfellow et al., 2014a; Oord et al., 2016) and the empirical analysis of deep classification networks (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015), the past few years have witnessed tremendous breakthroughs towards high-fidelity pure image generation (Radford et al., 2015; Karras et al., 2018; Brock et al., 2019), attribute-to-image generation (Yan et al., 2016; Choi et al., 2018), text-to-image generation (Mansimov et al., 2015; Reed et al., 2016; Van den Oord et al., 2016; Odena et al., 2017; Zhang et al., 2017; Johnson et al., 2018), and image-to-image translation (Isola et al., 2017; Zhu et al., 2017; Liu et al., 2017; Wang et al., 2018b; Hong et al., 2018).

Adversarial examples. Generating Lp-bounded adversarial perturbations has been extensively studied (Szegedy et al., 2013; Goodfellow et al., 2014b; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016; Carlini & Wagner, 2017; Xiao et al., 2018b). To further explore diverse adversarial attacks and potentially help inspire defense mechanisms, it is important to generate so-called "unrestricted" adversarial examples, which contain perturbations of unrestricted magnitude while still preserving perceptual realism (Brown et al., 2018). Recently, Xiao et al. (2018c) and Engstrom et al. (2017) proposed to spatially transform image patches instead of adding pixel-wise perturbations, but such spatial transformations do not consider semantic information. Our proposed SemanticAdv focuses on generating unrestricted perturbations with semantically meaningful patterns guided by visual attributes.

Relevant to our work, Song et al. (2018) proposed to synthesize adversarial examples with an unconditional generative model. Bhattad et al. (2019) studied semantic transformation in only the color or texture space. Compared to these works, SemanticAdv is able to generate adversarial examples in a controllable fashion using specific visual attributes by performing manipulation in the feature space. We further analyze the robustness of the recognition system by generating adversarial examples guided by different visual attributes. Concurrent to our work, Joshi et al.
(2019) proposed to generate semantic-based attacks against a restricted binary classifier, while our attack is able to mislead the model towards arbitrary adversarial targets. They conduct the manipulation within the attribute space, which is less flexible and less effective than our proposed feature-space interpolation." }, { "heading": "3 SEMANTIC ADVERSARIAL EXAMPLES", "text": "" }, { "heading": "3.1 PROBLEM DEFINITION", "text": "Let M be a machine learning model trained on a dataset D = {(x, y)} consisting of image-label pairs, where x ∈ R^{H×W×D_I} and y ∈ R^{D_L} denote the image and the ground-truth label, respectively. Here, H, W, D_I, and D_L denote the image height, image width, number of image channels, and label dimension, respectively. For each image x, our model M makes a prediction ŷ = M(x) ∈ R^{D_L}. Given a target image-label pair (x_tgt, y_tgt) with y ≠ y_tgt, a traditional attacker aims to synthesize adversarial examples {x_adv} by adding pixel-wise perturbations to, or spatially transforming, the original image x such that M(x_adv) = y_tgt. In this work, we introduce the concept of a semantic attacker that aims at generating adversarial examples by adding semantically meaningful perturbations with a conditional generative model G. Compared to a traditional attacker, which usually produces pixel-wise perturbations, the proposed method is able to produce semantically meaningful perturbations.

Semantic image editing. For simplicity, we start with the formulation where the input attribute is represented as a compact vector. This formulation can be directly extended to other input attribute formats, including semantic layouts. Let c ∈ R^{D_C} be an attribute representation reflecting the semantic factors (e.g., expression or hair color of a portrait image) of image x, where D_C indicates the attribute dimension and c_i ∈ {0, 1} indicates the presence of the i-th attribute. Here, our goal is to use the conditional generator for semantic image editing. For example, given a portrait image of a girl with black hair, and blonde hair as the new attribute, our generator is supposed to synthesize a new image that turns the girl's hair from black to blonde. More specifically, we denote the new attribute as c_new ∈ R^{D_C}, such that the synthesized image is given by x_new = G(x, c_new). In the special case when there is no attribute change (c = c_new), the generator simply reconstructs the input: x = G(x, c). Supported by the findings in (Bengio et al., 2013; Reed et al., 2014), our synthesized image x_new should fall close to the data manifold if we constrain the change of attribute values to be sufficiently small (e.g., we only update one semantic attribute at a time). In addition, we can potentially generate many such images by linearly interpolating between the semantic embeddings of the conditional generator G using the original image x and the synthesized image x_new with the augmented attribute.
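To make the editing operation concrete, the following is a minimal sketch of how a single attribute flip could be issued to an attribute-conditional generator; the generator interface G(x, c) and the function name are illustrative assumptions, not the authors' released code.

```python
import torch

def semantic_edit(G, x, c, attr_index, new_value=1.0):
    """Flip one semantic attribute and synthesize x_new = G(x, c_new).

    G: attribute-conditional generator (e.g., a StarGAN-style model, assumed).
    x: input image batch; c: binary attribute vectors of shape (B, D_C).
    """
    c_new = c.clone()
    c_new[:, attr_index] = new_value   # change a single semantic factor at a time
    x_new = G(x, c_new)                # attribute-conditioned synthesis
    x_rec = G(x, c)                    # with c unchanged, G should roughly reconstruct x
    return x_new, c_new, x_rec
```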
Attribute-space interpolation. We start with a simple solution (detailed in Eq. 1), assuming the adversarial example can be found by directly interpolating in the attribute space. Given a pair of attributes c and c_new, we introduce an interpolation parameter α ∈ (0, 1) to generate the augmented attribute vector c* ∈ R^{D_C} (see Eq. 1). Given the augmented attribute c* and the original image x, we produce the synthesized image with the generator G. For notational convenience, we also introduce a delegated function T_G as a re-parameterization of the generator G. Our formulation is also supported by the empirical results on attribute-conditioned image progression (Yan et al., 2016; Radford et al., 2015), which show that a well-trained generative model has the capability to synthesize a sequence of images with smooth attribute transitions.

$x_{adv} = \arg\min_{\alpha} \mathcal{L}(\mathcal{T}_G(\alpha; x, c, c_{new})), \quad \text{where } \mathcal{T}_G(\alpha; x, c, c_{new}) = G(x, c^{*}) \text{ and } c^{*} = \alpha \cdot c + (1 - \alpha) \cdot c_{new}$  (1)

Feature-map interpolation. Alternatively, we propose to interpolate using the feature map produced by the generator G = G_dec ∘ G_enc. Here, G_enc is the encoder module that takes the image as input and outputs the feature map, and G_dec is the decoder module that takes the feature map as input and outputs the synthesized image. Let f* = G_enc(x, c) ∈ R^{H_F×W_F×C_F} be the feature map of an intermediate layer in the generator, where H_F, W_F, and C_F indicate the height, width, and number of channels of the feature map.

$x_{adv} = \arg\min_{\alpha} \mathcal{L}(\mathcal{T}_G(\alpha; x, c, c_{new})), \quad \text{where } \mathcal{T}_G(\alpha; x, c, c_{new}) = G_{dec}(f^{*}), \; f^{*} = \alpha \odot G_{enc}(x, c) + (1 - \alpha) \odot G_{enc}(x, c_{new})$  (2)

Compared to attribute-space interpolation, which is parameterized by a scalar, we parameterize feature-map interpolation by a tensor α ∈ R^{H_F×W_F×C_F} (α_{h,w,k} ∈ (0, 1), where 1 ≤ h ≤ H_F, 1 ≤ w ≤ W_F, and 1 ≤ k ≤ C_F) with the same shape as the feature map. Compared to linear interpolation over the attribute space, this design introduces more flexibility when interpolating between the original image and the synthesized image. Empirical results in Section 4.2 show that this design is critical to the adversarial attack success rate." }, { "heading": "3.2 ADVERSARIAL OPTIMIZATION OBJECTIVES", "text": "As we see in Eq. 3, we obtain the adversarial image x_adv by minimizing the objective L(·) with respect to the synthesized image T_G(α; x, c, c_new), which is defined in Eq. (1) and Eq. (2), respectively. Here, each synthesized image T_G(α; x, c, c_new) is produced by interpolation using the conditional generator G. In our objective function, the first term is the adversarial metric, the second term is a smoothness constraint, and λ is used to control the balance between the two terms. The adversarial metric is minimized once the model M has been successfully attacked towards the target image-label pair (x_tgt, y_tgt). For identity verification, y_tgt is the identity representation of the target image; for the structured prediction tasks in our paper, y_tgt represents either target coordinates (landmark detection) or a target semantic label map (semantic segmentation).

$x_{adv} = \arg\min_{\alpha} \mathcal{L}(\mathcal{T}_G(\alpha; x, c, c_{new})), \quad \mathcal{L}(\mathcal{T}_G(\alpha; x, c, c_{new})) = \mathcal{L}_{adv}(\mathcal{T}_G(\alpha; x, c, c_{new}); \mathcal{M}, y_{tgt}) + \lambda \cdot \mathcal{L}_{smooth}(\alpha)$  (3)

Identity verification. In the identity verification task, two images are considered to belong to the same identity if the corresponding identity embeddings from the verification model M are reasonably close.

$\mathcal{L}_{adv}(\mathcal{T}_G(\alpha; x, c, c_{new}); \mathcal{M}, y_{tgt}) = \max\left(\kappa, \Phi^{id}_{\mathcal{M}}(\mathcal{T}_G(\alpha; x, c, c_{new}), x_{tgt})\right)$  (4)

As we see in Eq. 4, Φ^{id}_M(·, ·) measures the distance between two identity embeddings from the model M, where the normalized L2 distance is used in our setting. In addition, we introduce the parameter κ, a constant related to the false positive rate (FPR) threshold computed from the development set.
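As a concrete reference for Eqs. (1), (2), and (4), here is a hedged PyTorch-style sketch of the two interpolation parameterizations and the identity-verification loss; the encoder/decoder split `G_enc`/`G_dec`, the embedding normalization, and all function names are our own assumptions.

```python
import torch
import torch.nn.functional as F

def attribute_interp(G, x, c, c_new, alpha):
    # Eq. (1): a scalar alpha in (0, 1) mixes the two attribute vectors.
    c_star = alpha * c + (1.0 - alpha) * c_new
    return G(x, c_star)

def feature_map_interp(G_enc, G_dec, x, c, c_new, alpha):
    # Eq. (2): alpha is a tensor with the same shape (H_F, W_F, C_F) as the
    # encoder feature map, applied elementwise for extra flexibility.
    f_star = alpha * G_enc(x, c) + (1.0 - alpha) * G_enc(x, c_new)
    return G_dec(f_star)

def identity_adv_loss(model, x_adv, x_tgt, kappa):
    # Eq. (4): normalized L2 distance between identity embeddings, clipped
    # from below at kappa, i.e., max(kappa, distance).
    e_adv = F.normalize(model(x_adv), dim=1)
    e_tgt = F.normalize(model(x_tgt), dim=1)
    dist = (e_adv - e_tgt).norm(p=2, dim=1)
    return torch.clamp(dist, min=kappa).mean()
```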
Structured prediction. For structured prediction tasks such as landmark detection and semantic segmentation, we use the Houdini objective proposed in Cisse et al. (2017) as our adversarial metric and select the target landmarks (for landmark detection) or the target semantic label map (for segmentation) as y_tgt. In Eq. 5, Φ_M(·, ·) is a scoring function for each image-label pair and γ is the threshold.

$\mathcal{L}_{adv}(\mathcal{T}_G(\alpha; x, c, c_{new}); \mathcal{M}, y_{tgt}) = \mathbb{P}_{\gamma \sim \mathcal{N}(0,1)}\left[\Phi_{\mathcal{M}}(\mathcal{T}_G(\alpha; x, c, c_{new}), y^{*}) - \Phi_{\mathcal{M}}(\mathcal{T}_G(\alpha; x, c, c_{new}), y_{tgt}) < \gamma\right] \cdot l(y^{*}, y_{tgt})$  (5)

where y* = M(T_G(α; x, c, c_new)) and l(y*, y_tgt) is the task loss determined by the specific adversarial target.

Interpolation smoothness L_smooth. As the tensor to be interpolated in the feature-map space has far more parameters than the attribute vector itself, we propose to enforce a smoothness constraint on the tensor α used in feature-map interpolation. As we see in Eq. 6, the smoothness loss encourages the interpolation tensor to consist of spatially piece-wise constant patches, a constraint that has been widely used as a pixel-wise de-noising objective in natural image processing (Mahendran & Vedaldi, 2015; Johnson et al., 2016).

$\mathcal{L}_{smooth}(\alpha) = \sum_{h=1}^{H_F-1} \sum_{w=1}^{W_F} \|\alpha_{h+1,w} - \alpha_{h,w}\|_2^2 + \sum_{h=1}^{H_F} \sum_{w=1}^{W_F-1} \|\alpha_{h,w+1} - \alpha_{h,w}\|_2^2$  (6)" }, { "heading": "4 EXPERIMENTS", "text": "In the experimental section, we mainly focus on analyzing the proposed SemanticAdv in attacking state-of-the-art face recognition systems on CelebA (Liu et al., 2015), due to their wide applicability (e.g., identification for mobile payment) in the real world. In addition, we extend our attack to urban street scenes with semantic label maps as the condition. We attack the semantic segmentation model DRN-D-22 (Yu et al., 2017), previously trained on Cityscape (Cordts et al., 2016), by generating adversarial examples with dynamic objects manipulated (e.g., inserting a car into the scene).

The experimental section is organized as follows. First, we analyze the quality of the generated adversarial examples and qualitatively compare our method with Lp-bounded pixel-wise optimization-based methods (Carlini & Wagner, 2017; Dong et al., 2018; Xie et al., 2019). Second, we provide both qualitative and quantitative results obtained by controlling each of the semantic attributes one at a time; in terms of attack transferability, we evaluate our proposed SemanticAdv in various settings and further demonstrate the effectiveness of our method via query-free black-box attacks against online face verification platforms. Third, we compare our method with the baseline against different defense methods on the face verification task. Fourth, we demonstrate that the proposed SemanticAdv also applies to face landmark detection and street-view semantic segmentation." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Face identity verification. We select ResNet-50 and ResNet-101 (He et al., 2016) trained on MS-Celeb-1M (Guo et al., 2016) as our face verification models. The models are trained using two different objectives, namely, softmax loss (Sun et al., 2014; Zhang et al., 2018) and cosine loss (Wang et al., 2018a). For simplicity, we use the notation "R-N-S" to indicate the model with an N-layer residual-block backbone trained using softmax loss, while "R-N-C" indicates the same backbone trained using cosine loss. The distance between face features is measured by the normalized L2 distance. For the R-101-S model, we set the parameter κ based on the false positive rate (FPR) of the identity verification task. Three different FPRs have been used: 10−3 (with κ = 1.24), 3 × 10−4 (with κ = 1.05), and 10−4 (with κ = 0.60). The distance metrics and selected thresholds are commonly used when evaluating the performance of face recognition models (Klare et al., 2015; Kemelmacher-Shlizerman et al., 2016). Please check the Appendix (see Table B) for more details.
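For γ ~ N(0, 1), the probability in Eq. (5) equals 1 − Φ(margin) and can be evaluated in closed form with the error function. Below is a hedged sketch of the Houdini surrogate and the smoothness term of Eq. (6); all names and tensor layouts are illustrative assumptions.

```python
import math
import torch

def houdini_loss(score_pred, score_tgt, task_loss):
    # Eq. (5): P_{gamma ~ N(0,1)}[score_pred - score_tgt < gamma] * l(y*, y_tgt).
    # For standard normal gamma, this probability is 1 - CDF(margin).
    margin = score_pred - score_tgt
    prob = 0.5 * (1.0 - torch.erf(margin / math.sqrt(2.0)))
    return (prob * task_loss).mean()

def smoothness_loss(alpha):
    # Eq. (6): squared differences between spatially adjacent entries of the
    # interpolation tensor alpha of shape (H_F, W_F, C_F), encouraging
    # piece-wise constant patches.
    dh = (alpha[1:, :, :] - alpha[:-1, :, :]).pow(2).sum()
    dw = (alpha[:, 1:, :] - alpha[:, :-1, :]).pow(2).sum()
    return dh + dw
```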
To distinguish between the FPR used when generating adversarial examples and the FPR used in evaluation, we introduce two notations, "generation FPR (G-FPR)" and "test FPR (T-FPR)". For the experiments with query-free black-box API attacks, we use the online face verification services provided by Face++ (fac) and AliYun (ali).

Face landmark detection. We select the Face Alignment Network (FAN) (Bulat & Tzimiropoulos, 2017b), trained on 300W-LP (Zhu et al., 2016b) and fine-tuned on 300-W (Sagonas et al., 2013), for 2D landmark detection. The network is constructed by stacking Hour-Glass networks (Newell et al., 2016) with hierarchical blocks (Bulat & Tzimiropoulos, 2017a). Given a portrait image as input, FAN outputs 2D heatmaps which can subsequently be leveraged to yield 68 2D landmarks.

Semantic attacks on face images. In our experiments, we randomly sample 1,280 distinct identities from CelebA (Liu et al., 2015). To reduce the reconstruction error introduced by the generator (i.e., x ≠ G(x, c)) in practice, we take one more step to obtain the updated feature map f′ = G_enc(x′, c), where x′ = argmin_{x′} ‖G(x′, c) − x‖, in feature-map interpolation. In our experiments, we use the last conv layer before upsampling in the generator as our feature map f, based on its attack effectiveness. We also fix the parameter λ (which balances the adversarial loss and the smoothness constraint in Eq. 3) to 0.01 for both face verification and landmark detection.

We use StarGAN (Choi et al., 2018) for attribute-conditional image editing. In particular, we re-train the model on the CelebA dataset (Liu et al., 2015) by aligning the face landmarks and then resizing images to resolution 112 × 112. In addition, we select 17 identity-preserving attributes as our input condition, such as attributes related to facial expression and hair color.

For each distinct identity pair (x, x_tgt), we perform SemanticAdv guided by each of the 17 attributes (i.e., we intentionally add or remove one specific attribute while keeping the rest unchanged). In total, for each image x, we generate 17 adversarial images with different augmented attributes. In the experiments, we select a pixel-wise adversarial attack method (Carlini & Wagner, 2017) (referred to as CW) as our baseline for comparison. Compared to our proposed method, CW does not require visual attributes as part of the system, as it only generates one adversarial example for each instance. We refer to the corresponding attack success rate as the instance-wise success rate, in which the attack success rate is calculated per instance: for each instance with 17 adversarial images using different augmented attributes, if at least one of the 17 resulting images attacks successfully, we count the attack on this instance as a success, and vice versa.
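A small sketch of the instance-wise bookkeeping described above (and of the Best/Average/Worst metrics detailed later in Appendix A); the boolean array layout is an assumption.

```python
import numpy as np

def attack_metrics(success):
    """success: boolean array of shape (num_instances, 17); entry (i, j) is
    True if the attack guided by attribute j fooled the model on instance i."""
    best = success.any(axis=1).mean()    # instance-wise rate (fair comparison to CW)
    average = success.mean()             # mean over all instances and attributes
    worst = success.all(axis=1).mean()   # all 17 attributes must succeed
    return best, average, worst
```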
Semantic attacks on street-view images. We select DRN-D-22 (Yu et al., 2017) as our semantic segmentation model and fine-tune it on image regions of resolution 256 × 256. To synthesize semantic adversarial perturbations, we consider semantic label maps as the input attribute and leverage a generative image manipulation model (Hong et al., 2018) pre-trained on the CityScape (Cordts et al., 2016) dataset. Given an input semantic label map at resolution 256 × 256, we select a target object instance (e.g., a pedestrian) to attack. Then, we create a manipulated semantic label map by inserting another object instance (e.g., a car) in the vicinity of the target object. Similar to the experiments in the face domain, for both semantic label maps we use the image manipulation encoder to extract features (with 1,024 channels at spatial resolution 16 × 16) and conduct feature-space interpolation. We synthesize the final image by feeding the interpolated features to the image manipulation decoder. By searching for the interpolation coefficient that maximizes the attack rate, we are able to fool the segmentation model with the synthesized final image.

4.2 SemanticAdv ON FACE IDENTITY VERIFICATION

Attribute-space vs. feature-space interpolation. First, we found that both attribute-space and feature-space interpolation could generate reasonable samples (see Figure I in Appendix). Compared to attribute-space interpolation, however, generating adversarial examples with feature-space interpolation produced much better quantitative results (see Table E in Appendix). We measured the attack success rate of attribute-space interpolation (with G-FPR = T-FPR = 10−3): 0.08% on R-101-S, 0.31% on R-101-C, and 0.16% on both R-50-S and R-50-C, while feature-space interpolation achieves an almost 100% success rate on all of those models (see Figure 3). We conjecture that this is because the high-dimensional feature space provides more manipulation freedom.

Overall analysis. Figure 2 shows the adversarial images and corresponding perturbations generated against R-101-S by SemanticAdv and CW, respectively. The text below each image is the name of the augmented attribute; the sign before the name represents "adding" (in red) or "removing" (in blue) the corresponding attribute from the original image. We see that SemanticAdv is able to generate perceptually reasonable examples guided by the corresponding attribute. In particular, SemanticAdv generates perturbations on the regions correlated with the augmented attribute, while the perturbations of CW have no specific pattern and are evenly distributed across the image.

Analysis: controlling a single attribute. One of the key advantages of SemanticAdv is that we can generate adversarial perturbations in a more controllable fashion, guided by the semantic attributes. This allows analyzing the robustness of a recognition system against different types of semantic attacks. We group the adversarial examples by augmented attribute in various settings. In Figure 3, we present the attack success rate against two face verification models, namely, R-101-S and R-101-C, guided by different attributes. We highlight the bar in light blue for G-FPR equal to 10−3 and in blue for G-FPR equal to 10−4, respectively. As we see in this figure, with a larger T-FPR of 10−3, our SemanticAdv can achieve an almost 100% attack success rate across different attributes. With a smaller T-FPR of 10−4, we find that SemanticAdv guided by some attributes, such as Mouth Slightly Open and Arched Eyebrows, achieves less than a 50% attack success rate, while other attributes, such as Pale Skin and Eyeglasses, are relatively less affected. In summary, we found that SemanticAdv guided by attributes describing local shape (e.g., mouth, earrings) achieves a relatively lower attack success rate compared to attributes relevant to color (e.g., hair color) or the entire face region (e.g., skin). This suggests that the face verification models used in our experiments are more robustly trained against local shape variations than against color variations.
Please note that in practice we have the flexibility to select attributes for attacking an image based on the perceptual quality and attack success rate.

Figure 4 shows the adversarial examples with augmented semantic attributes against the R-101-S model. The attribute names are shown at the bottom. The upper images are G(x, c_new) generated by StarGAN with the augmented attribute c_new, while the lower images are the corresponding adversarial images with the same augmented attribute.

Figure 4: Qualitative analysis on single-attribute adversarial attack (G-FPR = 10−3). More results are shown in Appendix (see Figure K, Figure L and Figure M).

Analysis: semantic attack transferability. To further understand the properties of SemanticAdv, we analyze the transferability of SemanticAdv in various settings. For each model with different FPRs, we select the successfully attacked adversarial examples from Section 4.1 to construct our evaluation dataset and evaluate these adversarial samples across different models. Table 1a illustrates the transferability of SemanticAdv among different models using the same FPRs (G-FPR = T-FPR = 10−3). Table 1b illustrates the results with different FPRs for generation and evaluation (G-FPR = 10−4 and T-FPR = 10−3). As shown in Table 1a, adversarial examples generated against models trained with softmax loss exhibit certain transferability to models trained with cosine loss. We conduct the same experiment by generating adversarial examples with CW and find that they have weaker transferability compared to our SemanticAdv (results in brackets of Table 1).

As Table 1b illustrates, the adversarial examples generated against the model with smaller G-FPR = 10−4 exhibit a strong attack success rate when evaluated on the model with larger T-FPR = 10−3. In particular, we found that the adversarial examples generated against R-101-S have the best attack performance on the other models. These findings motivate the analysis of the query-free black-box API attack detailed in the following paragraph.

Query-free black-box API attack. In this experiment, we generate adversarial examples against R-101-S with G-FPR = 10−3 (κ = 1.24), G-FPR = 10−4 (κ = 0.60), and G-FPR < 10−4 (κ = 0.20), respectively. We evaluate our algorithm on two industry-level APIs, namely, the Face++ and AliYun face verification platforms.
Since attack transferability has never been explored by concurrent works that generate semantic adversarial examples, we use Lp-bounded pixel-wise methods (Carlini & Wagner, 2017; Dong et al., 2018; Xie et al., 2019) as our baselines. As we see in Table 2, which shows the best performance of each method, our SemanticAdv achieves a much higher attack success rate than CW on both APIs with all FPR thresholds (e.g., our adversarial examples generated with G-FPR < 10−4 achieve a 64.63% attack success rate on the Face++ platform with T-FPR = 10−3). In addition, we found that a lower G-FPR achieves a higher attack success rate on the APIs under the same T-FPR (see Table I in Appendix).

User study. To measure the perceptual quality of the adversarial images generated by SemanticAdv, we conduct a user study on Amazon Mechanical Turk (AMT). We use the adversarial examples generated with G-FPR < 10−4, which is the strictest setting in our experiment, to conduct the user study for both CW and SemanticAdv. In total, we collect 2,620 annotations from 77 participants. In 39.14 ± 1.96% of trials (close to the random guess of 50%), the adversarial images generated by SemanticAdv are selected as reasonably-looking images, and in 30.27 ± 1.96% of trials, the adversarial images generated by CW are selected as reasonably-looking. This indicates that SemanticAdv can generate more reasonable-looking adversarial examples than CW under the strictest setting with G-FPR < 10−4. Qualitative comparisons are shown in Appendix (see Figure H).

SemanticAdv against defense methods. We evaluate the strength of the proposed attack by testing against four existing defense methods, namely, feature squeezing (Xu et al., 2017), blurring (Li & Li, 2017), JPEG (Dziugaite et al., 2016), and AMI (Tao et al., 2018). For AMI (Tao et al., 2018), we first extract attribute witnesses with our aligned face images and then leverage them to construct an attribute-steered model. We use fc7 of a pretrained VGG (Parkhi et al., 2015) as the face representation. AMI yields a consistency score for each face image to indicate whether it is a benign image. The score is measured by the cosine similarity between the representations from the original model and the attribute-steered model. With 10% false positives on benign inputs, it only achieves 8% detection accuracy for SemanticAdv and 12% detection accuracy for CW.

Figure 7: Qualitative results on attacking street-view semantic segmentation model.

Figure 5 illustrates that SemanticAdv is more robust against these defense methods than CW. The same G-FPR and T-FPR are used for evaluation. Under the condition that T-FPR is 10−3, both SemanticAdv and CW achieve a high attack success rate, while SemanticAdv marginally outperforms CW when the FPR goes down to 10−4. While defense methods have proven to be effective against CW attacks on classifiers trained with ImageNet (Krizhevsky et al., 2012), our results indicate that these methods are still vulnerable in face verification systems with small T-FPR.
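For reference, here is a minimal sketch of the three input-transformation defenses above (bit-depth reduction, Gaussian blurring, and JPEG re-encoding), with parameter choices taken from the details in Appendix A; the `quality=75` mapping for the 0.75 compression ratio is our assumption.

```python
import io
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def feature_squeeze(x, bits=4):
    # Reduce color bit depth from 8 to `bits` bits per channel; x in [0, 1].
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def blur(x, sigma=1.0):
    # Gaussian smoothing of an (H, W, C) image; truncate=1.0 with sigma=1
    # yields a 3x3 kernel as described in Appendix A.
    return gaussian_filter(x, sigma=(sigma, sigma, 0), truncate=1.0)

def jpeg(x, quality=75):
    # JPEG encode/decode round trip to strip high-frequency perturbations.
    img = Image.fromarray((x * 255).astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    out = np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.float32)
    return out / 255.0
```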
4.3 SemanticAdv ON FACE LANDMARK DETECTION

We also evaluate the effectiveness of SemanticAdv on face landmark detection. We select two attack tasks, namely, "Rotating Eyes" and "Out of Region". For the "Rotating Eyes" task, we rotate the coordinates of the eyes in the image counter-clockwise by 90◦. For the "Out of Region" task, we set a target bounding box and attempt to push all points out of the box. We summarize the experimental setup and quantitative results in the Appendix (see Table D). As we see in Figure 6, our method is applicable to attacking landmark detection models.

4.4 SemanticAdv ON STREET-VIEW SEMANTIC SEGMENTATION

We further demonstrate the applicability of our SemanticAdv beyond the face domain by generating adversarial perturbations on street-view images. Figure 7 illustrates the adversarial examples on semantic segmentation. In the first example, we select the leftmost pedestrian as the target object instance and insert another car into the scene to attack it. The segmentation model has been successfully attacked to neglect the pedestrian (see last column), even though it does exist in the scene (see second-to-last column). In the second example, we insert an adversarial car into the scene with SemanticAdv, and the cyclist is recognized as a pedestrian by the segmentation model." }, { "heading": "5 CONCLUSIONS", "text": "Overall, we presented a novel attack method, SemanticAdv, which is capable of generating unrestricted adversarial perturbations guided by semantic attribute editing. Compared to existing methods, SemanticAdv works in a more controllable fashion. Experimental evaluations on face verification and landmark detection demonstrate several unique properties, including attack transferability. We believe this work will open up new research opportunities and challenges in the field of adversarial learning. For instance, how to leverage semantic information to defend against such attacks could lead to promising new discussions." }, { "heading": "A FACE IDENTITY VERIFICATION", "text": "Benchmark performance. First, we provide additional information about the ResNet models we used in the experiments. We summarize in Table A the performance on several face identity verification benchmarks, including the Labeled Faces in the Wild (LFW) dataset (Huang et al., 2008), the AgeDB-30 dataset (Moschoglou et al., 2017), and the Celebrities in Frontal-Profile (CFP) dataset (Sengupta et al., 2016).

M / benchmarks   LFW     AgeDB-30   CFP-FF   CFP-FP
R-50-S           99.27   94.15      99.26    91.49
R-101-S          99.42   95.93      99.57    95.07
R-50-C           99.38   95.08      99.24    90.24
R-101-C          99.67   95.58      99.57    92.71

Table A: The performance of ResNet models on several benchmark datasets.

Identity verification thresholds. Table B shows the threshold values used in our experiments when determining whether two portrait images belong to the same identity. The selected FPR thresholds and the normalized L2 distance between face features are commonly used when evaluating the performance of face recognition models (Klare et al., 2015; Kemelmacher-Shlizerman et al., 2016).

FPR / M     R-50-S   R-101-S   R-50-C   R-101-C
10−3        1.181    1.244     1.447    1.469
3 × 10−4    1.058    1.048     1.293    1.242
10−4        0.657    0.597     0.864    0.809

Table B: The threshold values for face identity verification.
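To illustrate how the thresholds in Table B are used at test time, here is a hedged sketch of the verification decision (R-101-S column shown); the model/embedding interface is assumed.

```python
import torch.nn.functional as F

# Normalized-L2 thresholds for R-101-S, taken from Table B above.
R101S_THRESHOLDS = {1e-3: 1.244, 3e-4: 1.048, 1e-4: 0.597}

def same_identity(model, x1, x2, fpr=1e-3):
    e1 = F.normalize(model(x1), dim=1)
    e2 = F.normalize(model(x2), dim=1)
    dist = (e1 - e2).norm(p=2, dim=1)
    return dist < R101S_THRESHOLDS[fpr]  # True: verified as the same person at this FPR
```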
We use Adam (Kingma & Ba, 2015) as the optimizer to generate adversarial examples for both our SemanticAdv and CW. More specifically, we run optimization for up to 200 steps with a fixed learning rate 0.05 for cases when G-FPR ≤ 10−4. Otherwise, we run optimization for up to 500 steps with a fixed learning rate 0.01. For pixel-wise attack method CW, we use additional pixel reconstruction loss with corresponding loss weight to 5. We run optimization for up to 1, 000 steps with a fixed learning rate 10−3.\nEvaluation metrics. To evaluate the performance of semanticAdv under different attributes, we consider three metrics as follows:\n• Best: if there is one attribute among 17 attributes that can be successfully attacked, we count the attack success rate for this face identity as 1;\n• Average: we calculate the average attack success rate among 17 attributes for the same face identity;\n• Worst: only when all of 17 attributes can be successfully attacked, we count the attack success rate for this person as 1;\nNote that, for a fair comparison with CW, we should use the Best metric for our SemanticAdv, as CW is the traditional pixel-wise attack method works regardless of the attribute. In addition, we report the performance using the average and worst metric, which actually provides additional insights into the robustness of face verification models across different attributes. For instance, combining the results from Table C and Figure 3, we understand that the face verification models used in our experiments have different levels of robustness across attributes. For example, face verification models are more robust against local shape variations than color variations, e.g., pale skin has higher attack success rate than mouth open. We believe these discoveries will help the community further understand the properties of face verification models.\nTable C shows the overall performance (accuracy) of face verification model and attack success rate of SemanticAdv and CW. As we can see from Table C, although the face model trained with cosine loss achieves higher face recognition performance, it is more vulnerable to adversarial attack compared with the model trained with softmax loss.\nG-FPR Metrics /M R-50-S R-101-S R-50-C R-101-C\n10−3\nVerification Accuracy 98.36 98.78 98.63 98.84 x′ 0.00 0.00 0.08 0.00 G(x′, c) 0.00 0.00 0.00 0.23 G(x′, cnew)(Best) 0.16 0.08 0.16 0.31 SemanticAdv (Best) 100.00 100.00 100.00 100.00 SemanticAdv (Worst) 91.95 93.98 99.53 99.77 SemanticAdv (Average) 98.98 99.29 99.97 99.99 CW 100.00 100.00 100.00 100.00\n3× 10−4 Verification Accuracy 97.73 97.97 97.91 97.85 x′ 0.00 0.00 0.00 0.00 G(x′, c) 0.00 0.00 0.00 0.00 G(x′, cnew)(Best) 0.00 0.00 0.00 0.00 SemanticAdv (Best) 100.00 100.00 100.00 100.00 SemanticAdv (Worst) 83.75 79.06 98.98 96.64 SemanticAdv (Average) 97.72 97.35 99.92 99.72 CW 100.00 100.00 100.00 100.00\n10−4\nVerification Accuracy 93.25 92.80 93.43 92.98 x′ 0.00 0.00 0.00 0.00 G(x′, c) 0.00 0.00 0.00 0.00 G(x′, cnew)(Best) 0.00 0.00 0.00 0.00 SemanticAdv (Best) 100.00 100.00 100.00 100.00 SemanticAdv (Worst) 33.59 19.84 67.03 48.67 SemanticAdv (Average) 83.53 76.64 95.57 91.13 CW 100.00 100.00 100.00 100.00\nTable C: Quantitative result of identity verification (%). It shows accuracy of face verification model and attack success rate of SemanticAdv and CW. x′, G(x′, c) and G(x′, cnew) are the intermediate results of our method before adversarial perturbation.\nDefense methods. 
Defense methods. Feature squeezing (Xu et al., 2017) is a simple but effective method that reduces the color bit depth to remove adversarial effects; we compress each image channel from 8 bits to 4 bits to evaluate its effectiveness. For blurring (Li & Li, 2017), we use a 3 × 3 Gaussian kernel with standard deviation 1 to smooth the adversarial perturbations. JPEG (Dziugaite et al., 2016) leverages compression and decompression to remove adversarial perturbations; we set the compression ratio to 0.75 in our experiment." }, { "heading": "B FACE LANDMARK DETECTION", "text": "Implementation details. To perform the attack on the face landmark detection model, we run the optimization for up to 2,000 steps with a fixed learning rate of 0.05. We set the balancing factor λ (see Eq. 3) to 0.01 for this experiment.

Evaluation metrics. We apply a different metric to each of the two adversarial attack tasks. For the "Rotating Eyes" task, we use the well-adopted Normalized Mean Error (NME) (Bulat & Tzimiropoulos, 2017b) for experimental evaluation:

$r_{NME} = \frac{1}{N} \sum_{k=1}^{N} \frac{\|p_k - \hat{p}_k\|_2}{\sqrt{W_B \cdot H_B}}$  (7)

where p_k denotes the k-th ground-truth landmark, p̂_k denotes the k-th predicted landmark, and √(W_B · H_B) is the square root of the ground-truth bounding box area, with W_B and H_B representing the width and height of the box.

For the "Out of Region" task, we consider the attack successful if the landmark predictions fall outside a pre-defined centering region on the portrait image. Thus, we introduce a metric that reflects the portion of landmarks outside of the pre-defined centering region: r_OUT = N_out / N_total, where N_out denotes the number of predicted landmarks outside the pre-defined bounding box and N_total denotes the total number of landmarks.

Tasks (Metrics)          Pristine   Augmented Attributes:
                                    Blond Hair  Young   Eyeglasses  Rosy Cheeks  Smiling  Arched Eyebrows  Bangs   Pale Skin
Rotating eyes (rNME) ↓   28.04      14.03       17.28   8.58        13.24        19.21    23.42            15.99   10.72
Out-of-region (rOUT) ↓   45.98      17.42       23.04   7.51        16.65        25.44    33.85            20.03   13.51

Table D: Quantitative results on face landmark detection (%). The two rows show the measured ratios (lower is better) for the "Rotating Eyes" and "Out of Region" tasks, respectively.

We present the quantitative results of SemanticAdv on the face landmark detection model in Table D for the two adversarial tasks, namely, "Out of Region" and "Rotating Eyes". We observe that our method is effective at attacking landmark detection models. For certain attributes, such as "Eyeglasses" and "Pale Skin", SemanticAdv achieves reasonably good performance.
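The two task metrics defined above, Eq. (7) and r_OUT, in a small NumPy sketch (landmark arrays of shape (N, 2) assumed):

```python
import numpy as np

def r_nme(pred, gt, box_w, box_h):
    # Eq. (7): mean point-to-point error, normalized by the square root of
    # the ground-truth bounding box area.
    return np.mean(np.linalg.norm(pred - gt, axis=1)) / np.sqrt(box_w * box_h)

def r_out(pred, box):
    # r_OUT = N_out / N_total: fraction of predicted landmarks falling
    # outside the pre-defined centering region box = (x0, y0, x1, y1).
    x0, y0, x1, y1 = box
    inside = ((pred[:, 0] >= x0) & (pred[:, 0] <= x1) &
              (pred[:, 1] >= y0) & (pred[:, 1] <= y1))
    return 1.0 - inside.mean()
```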
" }, { "heading": "C USER STUDY", "text": "We conducted a user study on the adversarial images of SemanticAdv and CW used in the API-attack experiment, together with the original images. The adversarial images are generated with G-FPR < 10−4 for both methods. We present a pair consisting of an original image and an adversarial image to participants and ask them to rank the two options. The order of the two images is randomized, and the images are displayed for 2 seconds on the screen during each trial. After the images disappear, the participants have unlimited time to select the more reasonably-looking image according to their perception. Each participant could conduct at most 50 trials, and each adversarial image was shown to 5 different participants. Some qualitative results are shown in Figure H. In total, we collect 2,620 annotations from 77 participants. In 39.14 ± 1.96% of trials, the adversarial images generated by SemanticAdv are selected as reasonably-looking images, and in 30.27 ± 1.96% of trials, the adversarial images generated by CW are selected as reasonably-looking images. This indicates that our semantic adversarial examples look more perceptually reasonable than those of CW. Additionally, we also conduct the user study with a larger G-FPR = 10−3: in 45.42 ± 1.96% of trials, the adversarial images generated by SemanticAdv are selected as reasonably-looking images, which is very close to the random guess (50%).

Figure H: Qualitative comparisons among ground truth, pixel-wise adversarial examples generated by CW, and our proposed SemanticAdv. Here, we present the results from G-FPR < 10−4 so that perturbations are visible." }, { "heading": "D ABLATION STUDY: FEATURE-SPACE INTERPOLATION", "text": "We conduct an ablation study on feature-space interpolation by analyzing attack success rates with different feature-maps in the StarGAN network. Table E shows the attack success rate on R-101-S. Here, we use fi to denote the feature-map after the i-th up-sampling operation; f0 denotes the feature-map before applying any up-sampling operation. The results demonstrate that samples generated by interpolating on f0 achieve the highest success rate. Since f0 is the feature-map before the decoder, it still embeds semantic information well in the feature space. We adopt f0 for interpolation in our experiments.

We also conduct a qualitative comparison between attribute-space and feature-space interpolation. As shown in Figure I, images generated by attribute-space and feature-space interpolation are both reasonably-looking.

T-FPR (G-FPR)         10−3 (10−3)            3 × 10−4 (3 × 10−4)    10−4 (10−4)
Layer (f)             f0     f1     f2       f0     f1     f2       f0     f1     f2
Attack Success Rate   99.29  98.32  75.62    97.35  94.10  57.15    76.64  67.40  19.63

Table E: Attack success rate when selecting different layers' feature-maps for interpolation on R-101-S (%). fi indicates the feature-map after the i-th up-sampling operation.

T-FPR (G-FPR)         10−3 (10−3)    3 × 10−4 (3 × 10−4)    10−4 (10−4)
Layer (f)             f−2    f−1     f−2    f−1             f−2    f−1
Attack Success Rate   49.40  92.09   30.44  81.87           6.66   45.46

Table F: Attack success rate when selecting different layers' feature-maps for interpolation on R-101-S (%). f−2 indicates the feature-map after the last down-sampling operation and f−1 indicates the feature-map after f−2." }, { "heading": "E SEMANTIC ATTACK TRANSFERABILITY", "text": "In Table G, we present the quantitative results on transferability with G-FPR = 10−4 and T-FPR = 10−4. We observe that with the stricter criterion (lower T-FPR) of the verification model, the transferability becomes lower across different models.
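A sketch of how the transferability matrices in Tables G and H can be assembled; `is_fooled` is a hypothetical predicate that applies the T-FPR threshold of the evaluation model.

```python
import numpy as np

def transfer_matrix(adv_sets, models, is_fooled):
    """Cell (i, j): success rate on model i of adversarial examples that
    were generated against model j."""
    n = len(models)
    T = np.zeros((n, n))
    for j, advs in enumerate(adv_sets):        # examples generated against model j
        for i, model in enumerate(models):     # evaluated on model i
            T[i, j] = np.mean([is_fooled(model, x_adv, x_tgt)
                               for (x_adv, x_tgt) in advs])
    return T
```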
To explore whether the improvement in transferability of SemanticAdv is introduced by the semantic editing itself rather than by the optimized semantic perturbation in feature space, we add a StrawMan [CW + x_new] baseline, which uses a controllable semantic-attribute-based generator to produce semantically different images without any notion of an adversarial attack, and then applies standard Lp CW attacks on the generated image. The results are shown in Table H. The performance of the StrawMan [CW + x_new] baseline is worse than that of SemanticAdv. This result justifies that our SemanticAdv is able to produce novel adversarial examples which cannot simply be achieved by combining an attribute-conditional image editing model with Lp-bounded perturbation.

Mtest / Mopt   R-50-S   R-101-S   R-50-C   R-101-C
R-50-S         1.000    0.005     0.000    0.000
R-101-S        0.000    1.000     0.000    0.000
R-50-C         0.000    0.000     1.000    0.000
R-101-C        0.000    0.000     0.000    1.000

Table G: Transferability of SemanticAdv: cell (i, j) shows the attack success rate of adversarial examples generated against the j-th model and evaluated on the i-th model. Results generated with G-FPR = 10−4, T-FPR = 10−4.

Mtest / Mopt   R-101-S                     Mtest / Mopt   R-101-S
R-50-S         0.035 (0.108)               R-50-S         0.615 (0.862)
R-101-S        1.000 (1.000)               R-101-S        1.000 (1.000)
R-50-C         0.145 (0.202)               R-50-C         0.570 (0.837)
R-101-C        0.085 (0.236)               R-101-C        0.695 (0.888)
(a) G-FPR = 10−3, T-FPR = 10−3             (b) G-FPR = 10−4, T-FPR = 10−3

Table H: Transferability of StrawMan: cell (i, j) shows the attack success rate of adversarial examples generated against the j-th model and evaluated on the i-th model. Results of SemanticAdv are listed in brackets." }, { "heading": "F QUERY-FREE BLACK-BOX API ATTACK", "text": "In Table I, we present the results of SemanticAdv performing query-free black-box attacks on two online face verification platforms. SemanticAdv outperforms CW and StrawMan on both APIs under all FPR thresholds. In addition, we achieve a higher attack success rate on the APIs using samples generated with a lower G-FPR compared to samples generated with a higher G-FPR under the same T-FPR. The original x and the generated x_new are regarded as reference points for the performance of the online face verification platforms.

Figure J illustrates our SemanticAdv attacking the Microsoft Azure face verification system, which further demonstrates the effectiveness of our approach.

Figure J: Illustration of our SemanticAdv on a real-world face verification platform (editing on pale skin). Note that the confidence denotes the likelihood that two faces belong to the same person.
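A hedged sketch of the query-free protocol above: examples are optimized purely against the local R-101-S surrogate at a strict G-FPR, and each is submitted to the online API exactly once. `local_attack`, `api_verify`, and the match threshold are hypothetical placeholders, not a real API client.

```python
def query_free_api_attack(local_attack, api_verify, pairs, kappa=0.20, match_thresh=0.5):
    # kappa=0.20 corresponds to the strictest G-FPR < 1e-4 setting above.
    successes = 0
    for x, x_tgt in pairs:
        x_adv = local_attack(x, x_tgt, kappa=kappa)             # surrogate-only optimization
        successes += api_verify(x_adv, x_tgt) >= match_thresh   # one query, no feedback loop
    return successes / len(pairs)
```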
}, { "heading": "G MORE VISUALIZATIONS", "text": "Figure K: Qualitative analysis on single-attribute adversarial attack (G-FPR = 10−3).\nFigure L: Qualitative analysis on single-attribute adversarial attack (G-FPR = 10−3).\nFigure M: Qualitative analysis on single-attribute adversarial attack (G-FPR = 10−3).\n15\nSemantic Adversarial Examples\n+eyeglasses +bangs\n12\n+blonde hair-young -wearing lipsticks\n30\n+rosy cheeks+smiling +receding hairline\nSource Image Target Image\n+arched eyebrows\nAdversarial Examples using CW\n-bushy eyebrows\n-pale skin\n-young\n69\n+blonde hair\n-receding hairline+rosy cheeks +bushy eyebrows\n4\n+eyeglasses +heavy makeup +bags under eyes +pale skin\n10\n+bangs +mouth slightly open +rosy cheeks+chubby\n28\n+wearing lipsticks +eyeglasses+arched eyebrows\n-smiling\n24\n+bangs -smiling +pale skin +blonde hair\n100-500 +eyeglasses +bangs\n+blonde hair\n-young\n+rosy cheeks +pale skin\n-wearing earrings\n+receding hairline\n+smiling\n+eyeglasses\n+heavy makeup\n+bags under eyes +pale skin\n+chubby\n+young\n+arched eyebrows\n-smiling\n-bangs -blonde hair\n+wearing lipsticks\n-mouth slightly open\n-rosy cheeks+mouth slightly open\n-young\n+bangs\n-mouth slightly open +receding hairline\n+bushy eyebrows +arched eyebrows +rosy cheeks\n+mouth slightly open +pale skin\n100\n132\n134\n152\n154\n189\n247\n249\nFigure N: Qualitative comparisons between our proposed SemanticAdv (G-FPR = 10−3) and pixelwise adversarial examples generated by CW. Along with the adversarial examples, we also provide the corresponding perturbations (residual) on the right.\nFigure O: Qualitative analysis on single-attribute adversarial attack (SemanticAdv with G-FPR = 10−3) by each other. Along with the adversarial examples, we also provide the corresponding perturbations (residual) on the right." } ]
2019
SEMANTICADV: GENERATING ADVERSARIAL EXAMPLES VIA ATTRIBUTE-CONDITIONAL IMAGE EDITING
SP:e84523133b0c393a7d673a3faef8cd2d6368830a
[ "The paper proposes to learn an energy based generative model using an ‘annealed’ denoising score matching objective. The main contribution of the paper is to show that denoising score matching can be trained on a range of noise scales concurrently using a small modification to the loss. Compared to approximate likelihood learning of Energy based models the key benefit is to sidestep the need for sampling from the model distribution which has proven to be very challenging in practice. Using a slightly modified Langevin Sampler the paper further demonstrated encouraging sample qualities on CIFAR10 as measured by FID and IS scores. ", "This paper presents a method of learning of energy based models using denoising score matching. This technique has been used before but only with limited success. The authors hypothesize that this is due to the fact that the matching was only performed over a single noise scale. The main idea of this work is to employ a range of scales to learn a single energy function. This trick helps to alleviate the problem of noisy samples concentrating in a low-volume region of the ambient space." ]
Energy-Based Models (EBMs) assign unnormalized log-probability to data samples. This functionality has a variety of applications, such as sample synthesis, data denoising, sample restoration, outlier detection, Bayesian reasoning, and many more. But training EBMs using standard maximum likelihood is extremely slow because it requires sampling from the model distribution. Score matching potentially alleviates this problem. In particular, denoising score matching (Vincent, 2011) has been successfully used to train EBMs. Using noisy data samples with one fixed noise level, these models learn fast and yield good results in data denoising (Saremi and Hyvarinen, 2019). However, demonstrations of such models in high-quality sample synthesis of high-dimensional data were lacking. Recently, Song and Ermon (2019) have shown that a generative model trained by denoising score matching accomplishes excellent sample synthesis when trained with data samples corrupted with multiple levels of noise. Here we provide analysis and empirical evidence showing that training with multiple noise levels is necessary when the data dimension is high. Leveraging this insight, we propose a novel EBM trained with multi-scale denoising score matching. Our model exhibits data generation performance comparable to state-of-the-art techniques such as (Song and Ermon, 2019) and GANs, and sets a new baseline for EBMs. The proposed model also provides density information and performs well in an image inpainting task.
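As a reading aid for the abstract above, here is a hedged sketch of what a multi-scale denoising score matching objective for an energy model can look like; the σ² weighting follows Song and Ermon (2019), and `energy_net` and `sigmas` are illustrative assumptions rather than the paper's exact recipe.

```python
import torch

def multiscale_dsm_loss(energy_net, x, sigmas):
    """Denoising score matching averaged over a range of noise scales.

    The model score is s(x) = -grad_x E(x); for Gaussian corruption
    x_tilde = x + sigma * eps, the DSM target score is -eps / sigma, and
    the sigma^2 weighting puts all noise levels on a comparable footing.
    sigmas: 1-D tensor of noise scales; x: image batch of shape (B, C, H, W).
    """
    idx = torch.randint(len(sigmas), (x.shape[0],))
    sigma = sigmas[idx].view(-1, 1, 1, 1)      # one scale per sample
    eps = torch.randn_like(x)
    x_tilde = (x + sigma * eps).requires_grad_(True)
    energy = energy_net(x_tilde).sum()
    score = -torch.autograd.grad(energy, x_tilde, create_graph=True)[0]
    return ((sigma * score + eps) ** 2).sum(dim=(1, 2, 3)).mean()
```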
[]
[ { "authors": [ "Shane Barratt", "Rishi Sharma" ], "title": "A note on the inception score", "venue": "arXiv preprint arXiv:1801.01973,", "year": 2018 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky T.Q. Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "arXiv preprint arXiv:1711.05136,", "year": 2017 }, { "authors": [ "Yuri Burda", "Roger Grosse", "Ruslan Salakhutdinov" ], "title": "Accurate and conservative estimates of mrf log-likelihood using reverse annealing", "venue": "In Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "B Chandra", "Rajesh Kumar Sharma" ], "title": "Adaptive noise schedule for denoising autoencoder", "venue": "In International conference on neural information processing,", "year": 2014 }, { "authors": [ "Ricky TQ Chen", "Jens Behrmann", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Residual flows for invertible generative modeling", "venue": "arXiv preprint arXiv:1906.02735,", "year": 2019 }, { "authors": [ "Hyunsun Choi", "Eric Jang" ], "title": "Generative ensembles for robust anomaly detection", "venue": "arXiv preprint arXiv:1810.01392,", "year": 2018 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "NICE: non-linear independent components estimation", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Harris Drucker", "Yann Le Cun" ], "title": "Double backpropagation increasing generalization performance", "venue": "In IJCNN-91-Seattle International Joint Conference on Neural Networks,", "year": 1991 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and generalization in energy-based models", "venue": "arXiv preprint arXiv:1903.08689,", "year": 2019 }, { "authors": [ "Fenglei Fan", "Wenxiang Cong", "Ge Wang" ], "title": "A new type of neurons for machine learning. International journal for numerical methods in biomedical engineering, 34(2):e2920, 2018", "venue": null, "year": 2018 }, { "authors": [ "Stuart Geman", "Donald Geman" ], "title": "Stochastic relaxation, gibbs distributions, and the bayesian restoration of images", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 1984 }, { "authors": [ "Krzysztof J. Geras", "Charles A. 
Sutton" ], "title": "Scheduled denoising autoencoders", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Tuomas Haarnoja", "Haoran Tang", "Pieter Abbeel", "Sergey Levine" ], "title": "Reinforcement learning with deep energy-based policies", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton" ], "title": "Training products of experts by minimizing contrastive divergence", "venue": "Neural computation,", "year": 2002 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimation of non-normalized statistical models by score matching", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Richard Jordan", "David Kinderlehrer", "Felix Otto" ], "title": "The variational formulation of the fokker– planck equation", "venue": "SIAM journal on mathematical analysis,", "year": 1998 }, { "authors": [ "Yan Karklin", "Eero P Simoncelli" ], "title": "Efficient coding of natural images with a population of noisy linear-nonlinear neurons", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Diederik P. 
Kingma", "Yann LeCun" ], "title": "Regularized estimation of image statistics by score matching", "venue": "In Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems", "year": 2010 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Scott Kirkpatrick", "C Daniel Gelatt", "Mario P Vecchi" ], "title": "Optimization by simulated annealing", "venue": null, "year": 1983 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Rithesh Kumar", "Anirudh Goyal", "Aaron Courville", "Yoshua Bengio" ], "title": "Maximum entropy generators for energy-based models", "venue": "arXiv preprint arXiv:1901.08508,", "year": 2019 }, { "authors": [ "Neil Lawrence" ], "title": "Probabilistic non-linear principal component analysis with gaussian process latent variable models", "venue": "Journal of machine learning research,", "year": 2005 }, { "authors": [ "Yann LeCun", "Sumit Chopra", "Raia Hadsell", "M Ranzato", "F Huang" ], "title": "A tutorial on energy-based learning", "venue": "Predicting structured data,", "year": 2006 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Eric T. Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Görür", "Balaji Lakshminarayanan" ], "title": "Do deep generative models know what they don’t know", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Radford M Neal" ], "title": "Annealed importance sampling", "venue": "Statistics and computing,", "year": 2001 }, { "authors": [ "Radford M Neal" ], "title": "Mcmc using hamiltonian dynamics. 
Handbook of markov chain monte carlo", "venue": null, "year": 2011 }, { "authors": [ "Jiquan Ngiam", "Zhenghao Chen", "Pang W Koh", "Andrew Y Ng" ], "title": "Learning deep energy models", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Erik Nijkamp", "Mitch Hill", "Tian Han", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "On the anatomy of mcmcbased maximum likelihood learning of energy-based models", "venue": null, "year": 1903 }, { "authors": [ "Georg Ostrovski", "Will Dabney", "Rémi Munos" ], "title": "Autoregressive quantile networks for generative modeling", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Emanuel Parzen" ], "title": "On estimation of a probability density function and mode", "venue": "The Annals of Mathematical Statistics,", "year": 1962 }, { "authors": [ "Sam T Roweis", "Lawrence K Saul" ], "title": "Nonlinear dimensionality reduction by locally linear embedding", "venue": null, "year": 2000 }, { "authors": [ "Stuart J Russell", "Peter Norvig" ], "title": "Artificial intelligence: a modern approach", "venue": "Malaysia; Pearson Education Limited,,", "year": 2016 }, { "authors": [ "Ruslan Salakhutdinov", "Iain Murray" ], "title": "On the quantitative analysis of deep belief networks", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Saeed Saremi", "Aapo Hyvarinen" ], "title": "Neural empirical bayes", "venue": "arXiv preprint arXiv:1903.02334,", "year": 2019 }, { "authors": [ "Saeed Saremi", "Arash Mehrjou", "Bernhard Schölkopf", "Aapo Hyvärinen" ], "title": "Deep energy estimator networks", "venue": "arXiv preprint arXiv:1805.08306,", "year": 2018 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "arXiv preprint arXiv:1907.05600,", "year": 2019 }, { "authors": [ "Yang Song", "Sahaj Garg", "Jiaxin Shi", "Stefano Ermon" ], "title": "Sliced score matching: A scalable approach to density and score estimation", "venue": "In Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence,", "year": 2019 }, { "authors": [ "Terence Tao" ], "title": "Topics in random matrix theory, volume 132", "venue": "American Mathematical Soc.,", "year": 2012 }, { "authors": [ "Joshua B Tenenbaum", "Vin De Silva", "John C Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": null, "year": 2000 }, { "authors": [ "Tijmen Tieleman" ], "title": "Training restricted boltzmann machines using approximations to the likelihood gradient", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Aäron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Roman Vershynin" ], "title": "High-dimensional probability: An introduction with applications in data science, volume 47", "venue": null, "year": 2018 }, { "authors": [ "Pascal Vincent" ], "title": "A connection between score 
matching and denoising autoencoders", "venue": "Neural computation,", "year": 2011 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of machine learning research,", "year": 2010 }, { "authors": [ "Martin J Wainwright", "Eero P Simoncelli" ], "title": "Scale mixtures of gaussians and the statistics of natural images", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Shuangfei Zhai", "Yu Cheng", "Weining Lu", "Zhongfei Zhang" ], "title": "Deep structured energy based models for anomaly detection", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Qianjun Zhang", "Lei Zhang" ], "title": "Convolutional adaptive denoising autoencoders for hierarchical feature extraction", "venue": "Frontiers of Computer Science,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION AND MOTIVATION", "text": "Treating data as stochastic samples from a probability distribution and developing models that can learn such distributions is at the core for solving a large variety of application problems, such as error correction/denoising (Vincent et al., 2010), outlier/novelty detection (Zhai et al., 2016; Choi and Jang, 2018), sample generation (Nijkamp et al., 2019; Du and Mordatch, 2019), invariant pattern recognition, Bayesian reasoning (Welling and Teh, 2011) which relies on good data priors, and many others.\nEnergy-Based Models (EBMs) (LeCun et al., 2006; Ngiam et al., 2011) assign an energy E(x) to each data point x which implicitly defines a probability by the Boltzmann distribution pm(x) = e−E(x)/Z. Sampling from this distribution can be used as a generative process that yield plausible samples of x. Compared to other generative models, like GANs (Goodfellow et al., 2014), flowbased models (Dinh et al., 2015; Kingma and Dhariwal, 2018), or auto-regressive models (van den Oord et al., 2016; Ostrovski et al., 2018), energy-based models have significant advantages. First, they provide explicit (unnormalized) density information, compositionality (Hinton, 1999; Haarnoja et al., 2017), better mode coverage (Kumar et al., 2019) and flexibility (Du and Mordatch, 2019). Further, they do not require special model architecture, unlike auto-regressive and flow-based models. Recently, Energy-based models has been successfully trained with maximum likelihood (Nijkamp et al., 2019; Du and Mordatch, 2019), but training can be very computationally demanding due to the need of sampling model distribution. Variants with a truncated sampling procedure have been proposed, such as contrastive divergence (Hinton, 2002). Such models learn much faster with the draw back of not exploring the state space thoroughly (Tieleman, 2008)." }, { "heading": "1.1 SCORE MATCHING, DENOISING SCORE MATCHING AND DEEP ENERGY ESTIMATORS", "text": "Score matching (SM) (Hyvärinen, 2005) circumvents the requirement of sampling the model distribution. In score matching, the score function is defined to be the gradient of log-density or the negative energy function. The expected L2 norm of difference between the model score function and the data score function are minimized. One convenient way of using score matching is learning the energy function corresponding to a Gaussian kernel Parzen density estimator (Parzen, 1962) of the data: pσ0(x̃) = ∫ qσ0(x̃|x)p(x)dx. Though hard to evaluate, the data score is well defined: sd(x̃) = ∇x̃ log(pσ0(x̃)), and the corresponding objective is:\nLSM (θ) = Epσ0(x̃) ‖ ∇x̃ log(pσ0(x̃)) +∇x̃E(x̃; θ) ‖ 2 (1)\nVincent (2011) studied the connection between denoising auto-encoder and score matching, and proved the remarkable result that the following objective, named Denoising Score Matching (DSM), is equivalent to the objective above:\nLDSM (θ) = Epσ0 (x̃,x) ‖ ∇x̃ log(qσ0(x̃|x)) +∇x̃E(x̃; θ) ‖ 2 (2)\nNote that in (2) the Parzen density score is replaced by the derivative of log density of the single noise kernel ∇x̃ log(qσ0(x̃|x)), which is much easier to evaluate. In the particular case of Gaussian noise, log(qσ0(x̃|x)) = −\n(x̃−x)2 2σ20 + C, and therefore:\nLDSM (θ) = Epσ0(x̃,x) ‖ x − x̃ + σ0 2∇x̃E(x̃; θ) ‖2 (3)\nThe interpretation of objective (3) is simple, it forces the energy gradient to align with the vector pointing from the noisy sample to the clean data sample. 
To optimize an objective involving the derivative of a function defined by a neural network, Kingma and LeCun (2010) proposed the use of double backpropagation (Drucker and Le Cun, 1991). Deep energy estimator networks (Saremi et al., 2018) first applied this technique to learn an energy function defined by a deep neural network. In this work, and similarly in Saremi and Hyvarinen (2019), an energy-based model was trained to match a Parzen density estimator of the data with a certain noise magnitude. The previous models were able to perform the denoising task, but they were unable to generate high-quality data samples from a random input initialization. Recently, Song and Ermon (2019) trained an excellent generative model by fitting a series of score estimators coupled together in a single neural network, each matching the score of a Parzen estimator with a different noise magnitude.\nThe questions we address here are why learning energy-based models with a single noise level does not permit high-quality sample generation, and what can be done to improve energy-based models. Our work builds on key ideas from Saremi et al. (2018); Saremi and Hyvarinen (2019); Song and Ermon (2019). Section 2 provides a geometric view of the learning problem in denoising score matching and a theoretical explanation of why training with one noise level is insufficient if the data dimension is high. Section 3 presents a novel method for training energy-based models, Multiscale Denoising Score Matching (MDSM). Section 4 describes empirical results of the MDSM model and comparisons with other models." }, { "heading": "2 A GEOMETRIC VIEW OF DENOISING SCORE MATCHING", "text": "Song and Ermon (2019) used denoising score matching with a range of noise levels, achieving great empirical results. The authors explained that large noise perturbations are required to enable the learning of the score in low data-density regions. But it is still unclear why a series of different noise levels is necessary, rather than one single large noise level. Following Saremi and Hyvarinen (2019), we analyze the learning process in denoising score matching based on measure concentration properties of high-dimensional random vectors.\nWe adopt the common assumption that the data distribution to be learned is high-dimensional, but only has support around a relatively low-dimensional manifold (Tenenbaum et al., 2000; Roweis and Saul, 2000; Lawrence, 2005). If the assumption holds, it causes a problem for score matching: the density, or the gradient of the density, is then undefined outside the manifold, making it difficult to train a valid density model for the data distribution defined on the entire space. Saremi and Hyvarinen (2019) and Song and Ermon (2019) discussed this problem and proposed to smooth the data distribution with a Gaussian kernel to alleviate the issue.\nTo further understand the learning in denoising score matching when the data lie on a manifold X and the data dimension is high, two elementary properties of random Gaussian vectors in high-dimensional spaces are helpful: First, the length distribution of random vectors becomes concentrated at √dσ (Vershynin, 2018), where σ² is the variance of a single dimension. Second, a random vector is always close to orthogonal to a fixed vector (Tao, 2012).
With these premises one can visualize the configuration of noisy and noiseless data points that enter the learning process: A data point x sampled from X and its noisy version x̃ always lie on a line which is almost perpendicular to the tangent space TxX and intersects X at x. Further, the distance vectors between (x, x̃) pairs all have similar length √dσ. As a consequence, the set of noisy data points concentrates on a set X̃√dσ,ε that has a distance within (√dσ − ε, √dσ + ε) from the data manifold X, where ε ≪ √dσ.\nTherefore, performing denoising score matching learning with (x, x̃) pairs generated with a fixed noise level σ, which is the approach taken previously except in Song and Ermon (2019), will match the score in the set X̃√dσ,ε and enable denoising of noisy points in the same set. However, the learning provides little information about the density outside this set, farther from or closer to the data manifold, as noisy samples outside X̃√dσ,ε rarely appear in the training process. An illustration is presented in Figure 1A.\nLet X̃ᶜ√dσ,ε denote the complement of the set X̃√dσ,ε. Even if pσ0(x̃ ∈ X̃ᶜ√dσ,ε) is very small in high-dimensional space, the score in X̃ᶜ√dσ,ε still plays a critical role in sampling from random initialization. This analysis may explain why models based on denoising score matching, trained with a single noise level, encounter difficulties in generating data samples when initialized at random. For empirical support of this explanation, see our experiments with models trained with single noise magnitudes (Appendix B). To remedy this problem, one has to apply a learning procedure of the sort proposed in Song and Ermon (2019), in which samples with different noise levels are used. Depending on the dimension of the data, the different noise levels have to be spaced narrowly enough to avoid empty regions in the data space. In the following, we will use Gaussian noise and employ a Gaussian scale mixture to produce the noisy data samples for the training (for details, see Section 3.1 and Appendix A).\nAnother interesting property of denoising score matching was suggested in the denoising autoencoder literature (Vincent et al., 2010; Karklin and Simoncelli, 2011): with increasing noise level, the learned features tend to have larger spatial scale. In our experiments we observe a similar phenomenon when training models with denoising score matching at a single noise scale. If one compares samples in Figure B.1, Appendix B, it is evident that a noise level of 0.3 produces a model that learns short-range correlations spanning only a few pixels, a noise level of 0.6 yields longer stroke structure without coherent overall structure, and a noise level of 1 yields more coherent long-range structure without details such as stroke width variations. This suggests that training with a single noise level in denoising score matching is not sufficient for learning a model capable of high-quality sample synthesis, as such a model has to capture data structure at all scales." }, { "heading": "3 LEARNING ENERGY-BASED MODEL WITH MULTISCALE DENOISING SCORE MATCHING", "text": "" }, { "heading": "3.1 MULTISCALE DENOISING SCORE MATCHING", "text": "Motivated by the analysis in Section 2, we strive to develop an EBM based on denoising score matching that can be trained with noisy samples in which the noise level is not fixed but drawn from a distribution. The model should approximate the Parzen density estimator of the data pσ0(x̃) = ∫ qσ0(x̃|x)p(x)dx.
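Before writing down the objective, note that the measure-concentration fact from Section 2 that motivates this construction is easy to check numerically; the snippet below is our own illustration at a CIFAR-like dimension.

```python
import torch

d, sigma = 32 * 32 * 3, 0.1
norms = (sigma * torch.randn(10000, d)).norm(dim=1)
print(f"sqrt(d) * sigma = {d ** 0.5 * sigma:.3f}")            # ~5.543
print(f"mean = {norms.mean():.3f}, std = {norms.std():.3f}")  # ~5.54 +/- 0.07
# Fixed-level noisy samples occupy a thin shell around the data manifold,
# so the score is never trained far from that shell.
```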
Specifically, the learning should minimize the difference between the derivative of the energy and the score of pσ0 under the expectation EpM(x̃) rather than Epσ0(x̃), the expectation taken in standard denoising score matching. Here pM(x̃) = ∫ qM(x̃|x)p(x)dx is chosen to cover the signal space more evenly, to avoid the measure concentration issue described above. The resulting Multiscale Score Matching (MSM) objective is:\nLMSM(θ) = EpM(x̃) ‖∇x̃ log pσ0(x̃) + ∇x̃E(x̃; θ)‖² (4)\nCompared to the standard score matching objective (1), the only change in the new objective (4) is the expectation. Both objectives are consistent if pM(x̃) and pσ0(x̃) have the same support, as shown formally in Proposition 1 of Appendix A. In Proposition 2, we prove that Equation 4 is equivalent to the following denoising score matching objective:\nLMDSM∗(θ) = EpM(x̃)qσ0(x|x̃) ‖∇x̃ log qσ0(x̃|x) + ∇x̃E(x̃; θ)‖² (5)\nThe above results hold for any noise kernel qσ0(x̃|x), but Equation 5 contains the reversed expectation, which is difficult to evaluate in general. To proceed, we choose qσ0(x̃|x) to be Gaussian, and also choose qM(x̃|x) to be a Gaussian scale mixture: qM(x̃|x) = ∫ qσ(x̃|x)p(σ)dσ and qσ(x̃|x) = N(x, σ²Id). After algebraic manipulation and one approximation (see the derivation following Proposition 2 in Appendix A), we can transform Equation 5 into a more convenient form, which we call Multiscale Denoising Score Matching (MDSM):\nLMDSM(θ) = Ep(σ)qσ(x̃|x)p(x) ‖∇x̃ log qσ0(x̃|x) + ∇x̃E(x̃; θ)‖² (6)\nThe square loss term evaluated at noisy points x̃ at larger distances from the true data points x will have larger magnitude. Therefore, in practice it is convenient to add a monotonically decreasing weighting term l(σ) for balancing the different noise scales, e.g. l(σ) = 1/σ². Ideally, we want our model to learn the correct gradient everywhere, so we would need to add noise of all levels. However, learning denoising score matching at very large or very small noise levels is useless. At very large noise levels the information of the original sample is completely lost. Conversely, in the limit of small noise, the noisy sample is virtually indistinguishable from real data. In neither case can one learn a gradient which is informative about the data structure. Thus, the noise range only needs to be broad enough to encourage learning of data features over all scales. In particular, we do not sample σ but instead choose a series of fixed values σ1 · · · σK. Further, substituting log qσ0(x̃|x) = −(x̃ − x)²/(2σ0²) + C into Equation 6, we arrive at the final objective:\nL(θ) = Σσ∈{σ1···σK} Eqσ(x̃|x)p(x) l(σ) ‖x − x̃ + σ0²∇x̃E(x̃; θ)‖² (7)\nIt may seem that σ0 is an important hyperparameter to our model, but after our approximation σ0 becomes just a scaling factor in front of the energy function, and can simply be set to one as long as the temperature range during sampling is scaled accordingly (see Section 3.2). Therefore the only hyperparameter is the range of noise levels used during training.\nOn the surface, objective (7) looks similar to the one in Song and Ermon (2019). The important difference is that Equation 7 approximates a single distribution, namely pσ0(x̃), the data smoothed with one fixed kernel qσ0(x̃|x). In contrast, Song and Ermon (2019) approximate the scores of multiple distributions, the family of distributions {pσi(x̃) : i = 1, ..., n}, resulting from the data smoothed by kernels of different widths σi.
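A minimal sketch of the final objective (7) could read as below; the linear noise grid, the network interface, and the convention of absorbing σ0 into the energy scale are our own assumptions for illustration.

```python
import torch

def mdsm_loss(energy_net, x, sigma_min=0.05, sigma_max=1.2):
    """Multiscale denoising score matching (Eq. 7) with l(sigma) = 1/sigma^2
    and sigma0 folded into the energy scale; one noise level per batch element."""
    b = x.shape[0]
    sigma = torch.linspace(sigma_min, sigma_max, b, device=x.device)
    sigma = sigma.view(b, *([1] * (x.dim() - 1)))
    x_tilde = (x + sigma * torch.randn_like(x)).requires_grad_(True)
    grad_E = torch.autograd.grad(energy_net(x_tilde).sum(), x_tilde,
                                 create_graph=True)[0]
    residual = x - x_tilde + grad_E                    # sigma0^2 absorbed into E
    per_example = residual.flatten(1).pow(2).sum(dim=1)
    return (per_example / sigma.flatten() ** 2).mean()  # l(sigma) weighting
```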
Because our model learns only a single target distribution, it does not require the noise magnitude as input." }, { "heading": "3.2 SAMPLING BY ANNEALED LANGEVIN DYNAMICS", "text": "Langevin dynamics has been used to sample from neural network energy functions (Du and Mordatch, 2019; Nijkamp et al., 2019). However, these studies described difficulties with mode exploration unless a very large number of sampling steps is used. To improve mode exploration, we propose incorporating simulated annealing in the Langevin dynamics. Simulated annealing (Kirkpatrick et al., 1983; Neal, 2001) improves mode exploration by sampling first at high temperature and then cooling down gradually. This has been successfully applied to challenging computational problems, such as combinatorial optimization.\nTo apply simulated annealing to Langevin dynamics, note that in a model of Brownian motion of a physical particle, the temperature in the Langevin equation enters as a factor √T in front of the noise term; some literature uses √(β⁻¹) where β = 1/T (Jordan et al., 1998). Adopting the √T convention, the Langevin sampling process (Bellec et al., 2017) is given by:\nxt+1 = xt − (ε²/2)∇xE(xt; θ) + ε√Tt N(0, Id) (8)\nwhere Tt follows some annealing schedule and ε denotes the step length, which is fixed. During sampling, samples behave very much like physical particles under Brownian motion in a potential field. Because the particles have average energies close to their current thermal energy, they explore the state space at different distances from the data manifold depending on the temperature. Eventually, they settle somewhere on the data manifold. The behavior of a particle's energy value during a typical annealing process is depicted in Appendix Figure F.1B.\nIf the obtained sample is still slightly noisy, we can apply a single-step gradient denoising jump (Saremi and Hyvarinen, 2019) to improve sample quality:\nxclean = xnoisy − σ0²∇xE(xnoisy; θ) (9)\nThis denoising procedure can be applied to noisy samples with any level of Gaussian noise, because in our model the gradient automatically has the right magnitude to denoise the sample. This step is justified by the Empirical Bayes interpretation of the denoising process, as studied in Saremi and Hyvarinen (2019).\nSong and Ermon (2019) also call their sample generation process annealed Langevin dynamics. It should be noted that their sampling process does not coincide with Equation 8. Their sampling procedure is best understood as sequentially sampling a series of distributions corresponding to the data distribution corrupted by different levels of noise." }, { "heading": "4 IMAGE MODELING RESULTS", "text": "Training and Sampling Details. The proposed energy-based model is trained on standard image datasets, specifically MNIST, Fashion MNIST, CelebA (Liu et al., 2015) and CIFAR-10 (Krizhevsky et al., 2009). During training we set σ0 = 0.1 and train over a noise range of σ ∈ [0.05, 1.2], with the different noise levels uniformly spaced along the batch dimension. For MNIST and Fashion MNIST we used geometrically distributed noise in the range [0.1, 3]. The weighting factor l(σ) is always set to 1/σ² to make the square term roughly independent of σ. We fix the batch size at 128 and use the Adam optimizer with a learning rate of 5 × 10⁻⁵. For MNIST and Fashion MNIST, we use a 12-layer ResNet with 64 filters; for the CelebA and CIFAR-10 datasets we used an 18-layer ResNet with 128 filters (He et al., 2016a;b). No normalization layer was used in any of the networks.
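Returning to the sampler of Section 3.2, Equations 8 and 9 admit a short sketch; the geometric temperature schedule, step length, and step count below are illustrative stand-ins for the empirically tuned settings reported in this section, and `energy_net` is assumed to be a trained network.

```python
import math
import torch

def grad_energy(energy_net, x):
    """Gradient of the scalar energy w.r.t. the input."""
    with torch.enable_grad():
        x = x.detach().requires_grad_(True)
        return torch.autograd.grad(energy_net(x).sum(), x)[0]

@torch.no_grad()
def annealed_langevin_sample(energy_net, shape, n_steps=2700, eps=0.02,
                             T_start=100.0, T_end=1e-4, sigma0=0.1):
    """Annealed Langevin dynamics (Eq. 8) plus one denoising jump (Eq. 9)."""
    x = torch.randn(shape)
    temps = torch.logspace(math.log10(T_start), math.log10(T_end), n_steps)
    for T in temps:
        x = x - 0.5 * eps ** 2 * grad_energy(energy_net, x) \
              + eps * T.sqrt() * torch.randn_like(x)
    return x - sigma0 ** 2 * grad_energy(energy_net, x)  # denoising jump
```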
We designed the output layer of all networks to take a generalized quadratic form (Fan et al., 2018). Because the energy function is anticipated to be approximately quadratic with respect to the noise level, this modification was able to boost the performance significantly. For more details on training and model architecture, see Appendix D. One notable result is that since our training method does not involve sampling, we achieved a speed-up of roughly an order of magnitude compared to maximum-likelihood training using Langevin dynamics.¹ Our method thus enables the training of energy-based models even when limited computational resources prohibit maximum likelihood methods.\nWe found that the choice of the maximum noise level has little effect on learning as long as it is large enough to encourage learning of the longest-range features in the data. However, as expected, learning with too small or too large noise levels is not beneficial and can even destabilize the training process. Further, our method appeared to be relatively insensitive to how the noise levels are distributed over a chosen range. Geometrically spaced noise as in Song and Ermon (2019) and linearly spaced noise both work, although in our case learning with linearly spaced noise was somewhat more robust.\nFor sampling the learned energy function we used annealed Langevin dynamics with an empirically optimized annealing schedule; see Figure F.1B for the particular shape of the annealing schedule we used. In contrast, annealing schedules with theoretically guaranteed convergence properties take extremely long (Geman and Geman, 1984). The range of temperatures to use in the sampling process depends on the choice of σ0, as the equilibrium distribution is roughly images with Gaussian noise of magnitude √T σ0 added on top. To ease traveling between modes far apart and to ensure even sampling, the initial temperature needs to be high enough to inject noise of sufficient magnitude. A choice of T = 100, which corresponds to added noise of magnitude √100 × 0.1 = 1, seems to be a sufficient starting point. For the step length ε we generally used 0.02, although any value within the range [0.015, 0.05] seemed to work fine. After the annealing process we performed a single denoising step to further enhance sample quality.\n¹For example, on a single GPU, training MNIST with a 12-layer ResNet takes 0.3s per batch with our method, while maximum likelihood training with a modest 30 Langevin steps per weight update takes 3s per batch. Both methods need a similar number of weight updates to train.\nUnconditional Image Generation. We demonstrate the generative ability of our model by displaying samples obtained by annealed Langevin sampling followed by the single-step denoising jump. We evaluated 50k sampled images after training on CIFAR-10 with two performance scores, Inception (Salimans et al., 2016) and FID (Heusel et al., 2017). We achieved an Inception Score of 8.31 and an FID of 31.7, comparable to modern GAN approaches. Scores for the CelebA dataset are not reported here, as they are not commonly reported and may depend on the specific pre-processing used. More samples and training images are provided in the Appendix for visual inspection. We believe that visual assessment is still essential because of the possible issues with the Inception score (Barratt and Sharma, 2018).
Indeed, we also found that the visually impressive samples were not necessarily the ones achieving the highest Inception Score.\nAlthough overfitting is not a common concern for generative models, we still tested our model for overfitting. We found no indication of overfitting by comparing model samples with their nearest neighbors in the data set; see Figure C.1 in the Appendix.\nMode Coverage. We repeated with our model the 3-channel MNIST mode coverage experiment, similar to the one in Kumar et al. (2019). An energy-based model was trained on 3-channel data where each channel is a random MNIST digit. Then 8000 samples were taken from the model and each channel was classified using a small MNIST classifier network. We obtained coverage of 966 modes, comparable to GAN approaches. Training was successful and our model assigned low energy to all the learned modes, but some modes were not accessed during sampling, likely due to the Langevin dynamics failing to explore these modes. A better sampling technique such as HMC (Neal, 2011) or a Maximum Entropy Generator (Kumar et al., 2019) could improve this result.\nImage Inpainting. Image inpainting can be achieved with our model by clamping a part of the image to ground truth and performing the same annealed Langevin and jump sampling procedure on the missing part of the image. Noise appropriate to the sampling temperature needs to be added to the clamped inputs. The quality of inpainting results of our model trained on CelebA and CIFAR-10 can be assessed in Figure 3. For the CIFAR-10 inpainting results we used the test set.\nLog likelihood estimation. For energy-based models, the log density can be obtained after estimating the partition function with Annealed Importance Sampling (AIS) (Salakhutdinov and Murray, 2008) or Reverse AIS (Burda et al., 2015). In our experiment on the CIFAR-10 model, similar to reports in Du and Mordatch (2019), there is still a substantial gap between the AIS and Reverse AIS estimates, even after very substantial computational effort. In Table 1, we report the result from Reverse AIS, as it tends to over-estimate the partition function and thus underestimate the density. Note that although density values and likelihood values are not directly comparable, we list them together due to the sheer lack of density models for CIFAR-10.\n²Authors reported difficulties evaluating likelihood. ³Upper bound obtained by Reverse AIS.\nWe also report a density of 1.21 bits/dim on the MNIST dataset, and we refer readers to Du and Mordatch (2019) for comparison to other models on this dataset. More details on this experiment are provided in the Appendix.\nOutlier Detection. Choi and Jang (2018) and Nalisnick et al. (2019) have reported intriguing behavior of high-dimensional density models on out-of-distribution samples. Specifically, they showed that many models assign higher likelihood to out-of-distribution samples than to real data samples. We investigated whether our model behaves similarly.\nOur energy function is only trained outside the data manifold, where samples are noisy, so the energy value at clean data points may not always be well behaved. Therefore, we added noise with magnitude σ0 before measuring the energy value. We find that our network behaves similarly to previous likelihood models: it assigns lower energy, and thus higher density, to some OOD samples. We show one example of this phenomenon in Appendix Figure F.1A.\nWe also attempted to use the denoising performance, i.e. the objective function, to perform outlier detection.
Intriguingly, the results are similar to those obtained using the energy value. Denoising performance seems to correlate more with the variance of the original image than with the content of the image." }, { "heading": "5 DISCUSSION", "text": "In this work we provided analyses and empirical results for understanding the limitations of learning the structure of high-dimensional data with denoising score matching. We found that the objective function confines learning to a small set due to the measure concentration phenomenon in random vectors. Therefore, sampling the learned distribution outside the set where the gradient is learned does not produce good results. One remedy to learn meaningful gradients in the entire space is to use samples during learning that are corrupted by different amounts of noise. Indeed, Song and Ermon (2019) applied this strategy very successfully.\nThe central contribution of our paper is to investigate how to use a similar learning strategy in EBMs. Specifically, we proposed a novel EBM, the Multiscale Denoising Score Matching (MDSM) model. The new model is capable of denoising, producing high-quality samples from random noise, and performing image inpainting. While also providing density information, our model learns an order of magnitude faster than models based on maximum likelihood.\nOur approach is conceptually similar to the idea of combining denoising autoencoders and annealing (Geras and Sutton, 2015; Chandra and Sharma, 2014; Zhang and Zhang, 2018), though this idea was proposed in the context of pre-training neural networks for classification applications. Previous efforts to learn energy-based models with score matching (Kingma and LeCun, 2010; Song et al., 2019) were either computationally intensive or unable to produce high-quality samples comparable to those obtained by other generative models such as GANs. Saremi et al. (2018) and Saremi and Hyvarinen (2019) trained energy-based models with the denoising score matching objective, but the resulting models cannot perform sample synthesis from random noise initialization.\nRecently, Song and Ermon (2019) proposed the NCSN model, capable of high-quality sample synthesis. This model approximates the scores of a family of distributions obtained by smoothing the data with kernels of different widths. The sampling in the NCSN model starts with sampling the distribution obtained with the coarsest kernel and successively switches to distributions obtained with finer kernels. Unlike NCSN, our method learns an energy-based model corresponding to pσ0(x̃) for a fixed σ0. This method improves score matching in high-dimensional space by matching the gradient of an energy function to the score of pσ0(x̃) on a set that avoids the measure concentration issue.\nAll told, we offer a novel EBM that achieves high-quality sample synthesis, which among EBM approaches sets a new state of the art. Compared to the NCSN model, our model is more parsimonious and supports single-step denoising without prior knowledge of the noise magnitude. But our model performs slightly worse than the NCSN model, which could have several reasons. First, the derivation of Equation 6 requires an approximation to keep the training procedure tractable, which could reduce performance. Second, the NCSN's output is a vector that, at least during optimization, does not always have to be the derivative of a scalar function. In contrast, in our model the network output is a scalar function.
Thus it is possible that the NCSN model performs better because it explores a larger set of functions during optimization." }, { "heading": "A MDSM OBJECTIVE", "text": "In this section, we provide a formal discussion of the MDSM objective and suggest it as an improved score matching formulation in high-dimensional space.\nVincent (2011) illustrated the connection between the model score −∇x̃E(x̃; θ) and the score of the Parzen window density estimator, ∇x̃ log pσ0(x̃). Specifically, the objective is Equation 1, which we restate here:\nLSM(θ) = Epσ0(x̃) ‖∇x̃ log pσ0(x̃) + ∇x̃E(x̃; θ)‖² (10)\nOur key observation is: in high-dimensional space, due to the concentration of measure, the expectation w.r.t. pσ0(x̃) over-weighs a thin shell at roughly distance √dσ from the empirical distribution p(x). Though in theory this is not a problem, in practice it leads to the score being well matched only on this shell. Based on this observation, we suggest replacing the expectation w.r.t. pσ0(x̃) with an expectation w.r.t. a distribution pM(x̃) that has the same support as pσ0(x̃) but avoids the measure concentration problem. We call this multiscale score matching, and the objective is the following:\nLMSM(θ) = EpM(x̃) ‖∇x̃ log pσ0(x̃) + ∇x̃E(x̃; θ)‖² (11)\nProposition 1. LMSM(θ) = 0 ⇐⇒ LSM(θ) = 0 ⇐⇒ θ = θ∗.\nGiven that pM(x̃) and pσ0(x̃) have the same support, it is clear that LMSM = 0 is equivalent to LSM = 0. By the proof of Theorem 2 in Hyvärinen (2005), we have LSM(θ) = 0 ⇐⇒ θ = θ∗. Thus, LMSM(θ) = 0 ⇐⇒ θ = θ∗.\nProposition 2. LMSM(θ) ≅ LMDSM∗(θ) = EpM(x̃)qσ0(x|x̃) ‖∇x̃ log qσ0(x̃|x) + ∇x̃E(x̃; θ)‖², where ≅ denotes equality up to an additive constant independent of θ.\nWe follow the same procedure as in Vincent (2011) to prove this result.\nLMSM(θ) = EpM(x̃) ‖∇x̃ log pσ0(x̃) + ∇x̃E(x̃; θ)‖²\n= EpM(x̃) ‖∇x̃E(x̃; θ)‖² + 2S(θ) + C\nS(θ) = EpM(x̃)⟨∇x̃ log pσ0(x̃), ∇x̃E(x̃; θ)⟩\n= ∫x̃ pM(x̃)⟨∇x̃ log pσ0(x̃), ∇x̃E(x̃; θ)⟩ dx̃\n= ∫x̃ pM(x̃)⟨∇x̃pσ0(x̃)/pσ0(x̃), ∇x̃E(x̃; θ)⟩ dx̃\n= ∫x̃ [pM(x̃)/pσ0(x̃)] ⟨∇x̃pσ0(x̃), ∇x̃E(x̃; θ)⟩ dx̃\n= ∫x̃ [pM(x̃)/pσ0(x̃)] ⟨∇x̃ ∫x p(x)qσ0(x̃|x)dx, ∇x̃E(x̃; θ)⟩ dx̃\n= ∫x̃ [pM(x̃)/pσ0(x̃)] ⟨∫x p(x)∇x̃qσ0(x̃|x)dx, ∇x̃E(x̃; θ)⟩ dx̃\n= ∫x̃ [pM(x̃)/pσ0(x̃)] ⟨∫x p(x)qσ0(x̃|x)∇x̃ log qσ0(x̃|x)dx, ∇x̃E(x̃; θ)⟩ dx̃\n= ∫x̃ ∫x [pM(x̃)/pσ0(x̃)] p(x)qσ0(x̃|x)⟨∇x̃ log qσ0(x̃|x), ∇x̃E(x̃; θ)⟩ dx̃dx\n= ∫x̃ ∫x [pM(x̃)/pσ0(x̃)] pσ0(x̃, x)⟨∇x̃ log qσ0(x̃|x), ∇x̃E(x̃; θ)⟩ dx̃dx\n= ∫x̃ ∫x pM(x̃)qσ0(x|x̃)⟨∇x̃ log qσ0(x̃|x), ∇x̃E(x̃; θ)⟩ dx̃dx\nThus we have:\nLMSM(θ) = EpM(x̃) ‖∇x̃E(x̃; θ)‖² + 2S(θ) + C\n= EpM(x̃)qσ0(x|x̃) ‖∇x̃E(x̃; θ)‖² + 2EpM(x̃)qσ0(x|x̃)⟨∇x̃ log qσ0(x̃|x), ∇x̃E(x̃; θ)⟩ + C\n= EpM(x̃)qσ0(x|x̃) ‖∇x̃ log qσ0(x̃|x) + ∇x̃E(x̃; θ)‖² + C′\nSo LMSM(θ) ≅ LMDSM∗(θ).\nThe above analysis applies to any noise distribution, not only Gaussian, but LMDSM∗ has a reversed expectation form that is not easy to work with. To proceed further we study the case where qσ0(x̃|x) is Gaussian, and choose qM(x̃|x) as a Gaussian scale mixture (Wainwright and Simoncelli, 2000) with pM(x̃) = ∫ qM(x̃|x)p(x)dx.
By Proposition 1 and Proposition 2, we have the following form to optimize:\nLMDSM∗(θ) = ∫x̃ ∫x pM(x̃)qσ0(x|x̃) ‖∇x̃ log qσ0(x̃|x) + ∇x̃E(x̃; θ)‖² dx̃dx\n= ∫x̃ ∫x [qσ0(x|x̃)/qM(x|x̃)] pM(x̃)qM(x|x̃) ‖∇x̃ log qσ0(x̃|x) + ∇x̃E(x̃; θ)‖² dx̃dx\n= ∫x̃ ∫x [qσ0(x|x̃)/qM(x|x̃)] pM(x, x̃) ‖∇x̃ log qσ0(x̃|x) + ∇x̃E(x̃; θ)‖² dx̃dx\n= ∫x̃ ∫x [qσ0(x|x̃)/qM(x|x̃)] qM(x̃|x)p(x) ‖∇x̃ log qσ0(x̃|x) + ∇x̃E(x̃; θ)‖² dx̃dx (*)\n≈ LMDSM(θ)\nTo minimize Equation (*), we can use the following importance sampling procedure (Russell and Norvig, 2016): sample from the empirical distribution p(x), then sample the Gaussian scale mixture qM(x̃|x), and finally weight the sample by qσ0(x|x̃)/qM(x|x̃). We expect the ratio to be close to 1 for the following reasons: using Bayes' rule, qσ0(x|x̃) = p(x)qσ0(x̃|x)/pσ0(x̃), we can see that qσ0(x|x̃) only has support on the discrete data points x; the same holds for qM(x|x̃). Because x̃ is generated by adding Gaussian noise to a real data sample, both estimators should give results highly concentrated on the original sample point x. Therefore, in practice, we ignore the weighting factor and use Equation 6. Improving upon this approximation is left for future work." }, { "heading": "B PROBLEM WITH SINGLE NOISE DENOISING SCORE MATCHING", "text": "To compare with previous methods, we trained energy-based models with denoising score matching using one noise level on MNIST, initialized the sampling with Gaussian noise of the same level, sampled with Langevin dynamics at T = 1 for 1000 steps, and performed one denoising jump to recover the model's best estimate of the clean sample; see Figure B.1. We used the same 12-layer ResNet as in the other MNIST experiments. Models were trained for 100000 steps before sampling." }, { "heading": "C OVERFITTING TEST", "text": "We demonstrate that the model does not simply memorize training examples by comparing model samples with their nearest neighbors in the training set. We use Fashion MNIST for this demonstration because overfitting occurs there more easily than on more complicated datasets; see Figure C.1." }, { "heading": "D DETAILS ON TRAINING AND SAMPLING", "text": "We used a custom-designed ResNet architecture for all experiments. For MNIST and Fashion MNIST we used a 12-layer ResNet with 64 filters on the first layer, while for the CelebA and CIFAR datasets we used an 18-layer ResNet with 128 filters on the first layer. All networks used the ELU activation function. We did not use any normalization in the ResBlocks, and the filter number is doubled at each downsampling block. Details about the structure of our networks can be found in our code release. All mentioned models can be trained on 2 GPUs within 2 days.\nSince the gradient of our energy model scales linearly with the noise, we expected our energy function to scale quadratically with the noise magnitude. Therefore, we modified the standard energy-based network output layer to take a flexible quadratic form (Fan et al., 2018):\nEout = (Σi ai hi + b1)(Σi ci hi + b2) + Σi di hi² + b3 (12)\nwhere ai, ci, di and b1, b2, b3 are learnable parameters, and hi is the (flattened) output of the last residual block. We found this modification to significantly improve performance compared to using a simple linear last layer.\nFor the CIFAR and CelebA results we trained for 300k weight updates, saving a checkpoint every 5000 updates. We then took 1000 samples from each saved network and used the network with the lowest
For MNIST and fashion MNIST we simply trained for 100k updates and used the last checkpoint. During training we pad MNIST and Fashion MNIST to 32*32 for convenience and randomly flipped CelebA images. No other modification was performed. We only constrained the gradient of the energy function, the energy value itself could in principle be unbounded. However, we observed that they naturally stabilize so we did not explicitly regularize them. The annealing sampling schedule is optimized to improve sample quality for CIFAR-10 dataset, and consist of a total of 2700 steps. For other datasets the shape has less effect on sample quality, see Figure F.1 B for the shape of annealing schedule used.\nFor the Log likelihood estimation we initialized reverse chain on test images, then sample 10000 intermediate distribution using 10 steps HMC updates each. Temperature schedule is roughly exponential shaped and the reference distribution is an isotropic Gaussian. The variance of estimation was generally less than 10% on the log scale. Due to the high variance of results, and to avoid getting dominated by a single outlier, we report average of the log density instead of log of average density." }, { "heading": "E EXTENDED SAMPLES AND INPAINTING RESULTS", "text": "We provide more inpainting examples and further demonstrate the mixing during sampling process in Figure E.1. We also provide more samples for readers to visually judge the quality of our sample generation in Figure E.2, E.3 and E.4. All samples are randomly selected.\nFigure E.1: Denoised Sampling process and inpainting results. Sampling process is from left to right.\nFigure E.2: Extended Fashion MNIST and MNIST samples" }, { "heading": "F SAMPLING PROCESS AND ENERGY VALUE COMPARISONS", "text": "Here we show how the average energy of samples behaves vs the sampling temperature. We also show an example of our model making out of distribution error that is common in most other likelihood based models (Nalisnick et al., 2019) Figure F.1.\nFigure E.3: Samples (left panel) from network trained on CelebA, and training examples from the dataset (right panel).\nFigure E.4: Samples (left panel) from energy-based model trained on CIFAR-10 next to training examples (right panel).\nB.A.\nFigure F.1: A. Energy values for CIFAR-10 train, CIFAR-10 test and SVHN datasets for a network trained on CIFAR-10 images. Note that the network does not over fit to the training set, but just like most deep likelihood model, it assigns lower energy to SVHN images than its own training data. B. Annealing schedule and a typical energy trace for a sample during Annealed Langevin Sampling. The energy of the sample is proportional to the temperature, indicating sampling is close to a quasi-static process." } ]
2,019
null
SP:e958fbb0b004f454b79944ca72958254087147d4
[ "This paper proposes stable GradientLess Descent (GLD) algorithms that do not rely on gradient estimate. Based on the low-rank assumption on P_A, the iteration complexity is poly-logarithmically dependent on dimensionality. The theoretical analysis of the main results is based on a geometric perspective, which is interesting. The experimental results on synthetic and MuJoCo datasets validate the effectiveness of the proposed algorithms.", "The paper proposes a novel zeroth-order algorithm for high-dimensional optimization. In particular, the algorithm as an instance of direct search algorithms where no attempt is made to estimate the gradient of the function during the optimization process. The authors study the optimization of monotone transformations of strongly-convex and smooth functions and they prove complexity bounds as a function of the condition number, the dimensionality and the desired accuracy. These results are also extended to the case where the function actually depends on a lower-dimensional input. Without any knowledge of the actual subspace of interest, the algorithm is able to adapt to the (lower) dimensionality of the problem. The proposed algorithms are tested on synthetic optimization problems and in a few Mujoco environments for policy optimization." ]
Zeroth-order optimization is the process of minimizing an objective f(x), given oracle access to evaluations at adaptively chosen inputs x. In this paper, we present two simple yet powerful GradientLess Descent (GLD) algorithms that do not rely on an underlying gradient estimate and are numerically stable. We analyze our algorithms from a novel geometric perspective and present an analysis that shows convergence within an ε-ball of the optimum in O(kQ log(n) log(R/ε)) evaluations, for any monotone transform of a smooth and strongly convex objective with latent dimension k < n, where the input dimension is n, R is the diameter of the input space and Q is the condition number. Our rates are the first of their kind to be both 1) poly-logarithmically dependent on dimensionality and 2) invariant under monotone transformations. We further leverage our geometric perspective to show that our analysis is optimal. Both monotone invariance and the ability to utilize a low latent dimensionality are key to the empirical success of our algorithms, as demonstrated on BBOB and MuJoCo benchmarks.
[ { "affiliations": [], "name": "Daniel Golovin" }, { "affiliations": [], "name": "John Karro" }, { "affiliations": [], "name": "Greg Kochanski" }, { "affiliations": [], "name": "Chansoo Lee" }, { "affiliations": [], "name": "Xingyou Song" }, { "affiliations": [], "name": "Qiuyi (Richard) Zhang" } ]
[ { "authors": [ "Kenneth J Arrow", "Alain C Enthoven" ], "title": "Quasi-concave programming", "venue": "Econometrica: Journal of the Econometric Society,", "year": 1961 }, { "authors": [ "Anne Auger", "Nikolaus Hansen" ], "title": "A restart cma evolution strategy with increasing population size", "venue": "In Evolutionary Computation,", "year": 2005 }, { "authors": [ "Krishnakumar Balasubramanian", "Saeed Ghadimi" ], "title": "Zeroth-order (non)-convex stochastic optimization via conditional gradient and gradient updates", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Samuel H Brooks" ], "title": "A discussion of random methods for seeking maxima", "venue": "Operations research,", "year": 1958 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Krzysztof Choromanski", "Mark Rowland", "Vikas Sindhwani", "Richard E Turner", "Adrian Weller" ], "title": "Structured evolution with compact architectures for scalable policy optimization", "venue": "arXiv preprint arXiv:1804.02395,", "year": 2018 }, { "authors": [ "Josip Djolonga", "Andreas Krause", "Volkan Cevher" ], "title": "High-dimensional gaussian process bandits", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Mahdi Dodangeh", "Luís N Vicente" ], "title": "Worst case complexity of direct search under convexity", "venue": "Mathematical Programming,", "year": 2016 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "John C Duchi", "Michael I Jordan", "Martin J Wainwright", "Andre Wibisono" ], "title": "Optimal rates for zero-order convex optimization: The power of two function evaluations", "venue": "IEEE Transactions on Information Theory,", "year": 2015 }, { "authors": [ "Abraham D Flaxman", "Adam Tauman Kalai", "H Brendan McMahan" ], "title": "Online convex optimization in the bandit setting: gradient descent without a gradient", "venue": "In Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms,", "year": 2005 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Eduard Gorbunov", "Adel Bibi", "Ozan Sener", "El Houcine Bergou", "Peter Richtárik" ], "title": "A stochastic derivative free optimization method with momentum", "venue": null, "year": 1905 }, { "authors": [ "Serge Gratton", "Clément W Royer", "Luís Nunes Vicente", "Zaikun Zhang" ], "title": "Direct search based on probabilistic descent", "venue": "SIAM Journal on Optimization,", "year": 2015 }, { "authors": [ "Nikolaus Hansen", "Steffen Finck", "Raymond Ros", "Anne Auger" ], "title": "Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions", "venue": "Research Report RR-6829,", "year": 2009 }, { "authors": [ "Elad Hazan", "Adam Klivans", "Yang Yuan" ], "title": "Hyperparameter optimization: A spectral approach", "venue": "arXiv preprint arXiv:1706.00764,", "year": 2017 }, { "authors": [ 
"Shengqiao Li" ], "title": "Concise formulas for the area and volume of a hyperspherical cap", "venue": "Asian Journal of Mathematics and Statistics,", "year": 2011 }, { "authors": [ "Sijia Liu", "Jie Chen", "Pin-Yu Chen", "Alfred O Hero" ], "title": "Zeroth-order online alternating direction method of multipliers: Convergence analysis and applications", "venue": "arXiv preprint arXiv:1710.07804,", "year": 2017 }, { "authors": [ "Sijia Liu", "Bhavya Kailkhura", "Pin-Yu Chen", "Paishun Ting", "Shiyu Chang", "Lisa Amini" ], "title": "Zerothorder stochastic variance reduction for nonconvex optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Horia Mania", "Aurelia Guy", "Benjamin Recht" ], "title": "Simple random search provides a competitive approach to reinforcement learning", "venue": "arXiv preprint arXiv:1803.07055,", "year": 2018 }, { "authors": [ "H. Brendan McMahan", "Matthew J. Streeter" ], "title": "Adaptive bound optimization for online convex optimization", "venue": "In COLT 2010 - The 23rd Conference on Learning", "year": 2010 }, { "authors": [ "Yurii Nesterov", "Vladimir Spokoiny" ], "title": "Random gradient-free minimization of convex functions", "venue": "Technical report, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE),", "year": 2011 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia conference on computer and communications security,", "year": 2017 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution strategies as a scalable alternative to reinforcement learning", "venue": "arXiv preprint arXiv:1703.03864,", "year": 2017 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017", "venue": null, "year": 2017 }, { "authors": [ "Ohad Shamir" ], "title": "An optimal algorithm for bandit and zero-order convex optimization with two-point feedback", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P Adams" ], "title": "Practical bayesian optimization of machine learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Sebastian U Stich", "Christian L Muller", "Bernd Gartner" ], "title": "Optimization of convex functions with random pursuit", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Yining Wang", "Simon Du", "Sivaraman Balakrishnan", "Aarti Singh" ], "title": "Stochastic zeroth-order optimization in high dimensions", "venue": "arXiv preprint arXiv:1710.10551,", "year": 2017 }, { "authors": [ "Ziyu Wang", "Masrour Zoghi", "Frank Hutter", "David Matheson", "Nando De Freitas" ], "title": "Bayesian optimization in high dimensions via random embeddings", "venue": "In Twenty-Third International Joint Conference on Artificial Intelligence,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "We consider the problem of zeroth-order optimization (also known as gradient-free optimization, or bandit optimization), where our goal is to minimize an objective function f : Rn → R with as few evaluations of f(x) as possible. For many practical and interesting objective functions, gradients are difficult to compute and there is still a need for zeroth-order optimization in applications such as reinforcement learning (Mania et al., 2018; Salimans et al., 2017; Choromanski et al., 2018), attacking neural networks (Chen et al., 2017; Papernot et al., 2017), hyperparameter tuning of deep networks (Snoek et al., 2012), and network control (Liu et al., 2017).\nThe standard approach to zeroth-order optimization is, ironically, to estimate the gradients from function values and apply a first-order optimization algorithm (Flaxman et al., 2005). Nesterov & Spokoiny (2011) analyze this class of algorithms as gradient descent on a Gaussian smoothing of the objective and gives an accelerated O(n √ Q log((LR2 + F )/ )) iteration complexity for an LLipschitz convex function with condition number Q and R = ‖x0 − x∗‖ and F = f(x0) − f(x∗). They propose a two-point evaluation scheme that constructs gradient estimates from the difference between function values at two points that are close to each other. This scheme was extended by (Duchi et al., 2015) for stochastic settings, by (Ghadimi & Lan, 2013) for nonconvex settings, and by (Shamir, 2017) for non-smooth and non-Euclidean norm settings. Since then, first-order techniques such as variance reduction (Liu et al., 2018), conditional gradients (Balasubramanian & Ghadimi, 2018), and diagonal preconditioning (Mania et al., 2018) have been successfully adopted in this setting. This class of algorithms are also known as stochastic search, random search, or (natural) evolutionary strategies and have been augmented with a variety of heuristics, such as the popular CMA-ES (Auger & Hansen, 2005).\nThese algorithms, however, suffer from high variance due to non-robust local minima or highly non-smooth objectives, which are common in the fields of deep learning and reinforcement learn-\n∗Author list in alphabetical order.\ning. Mania et al. (2018) notes that gradient variance increases as training progresses due to higher variance in the objective functions, since often parameters must be tuned precisely to achieve reasonable models. Therefore, some attention has shifted into direct search algorithms that usually finds a descent direction u and moves to x + δu, where the step size is not scaled by the function difference.\nThe first approaches for direct search were based on deterministic approaches with a positive spanning set and date back to the 1950s (Brooks, 1958). Only recently have theoretical bounds surfaced, with Gratton et al. (2015) giving an iteration complexity that is a large polynomial of n and Dodangeh & Vicente (2016) giving an improved O(n2L2/ ). Stochastic approaches tend to have better complexities: Stich et al. (2013) uses line search to give a O(nQ log(F/ )) iteration complexity for convex functions with condition number Q and most recently, Gorbunov et al. (2019) uses importance sampling to give a O(nQ̄ log(F/ )) complexity for convex functions with average condition number Q̄, assuming access to sampling probabilities. Stich et al. 
(2013) note that direct search algorithms are invariant under monotone transforms of the objective, a property that might explain their robustness in high-variance settings.
In general, zeroth-order optimization suffers at least a linear dependence on the input dimension n, and recent works have tried to address this limitation when n is large but f(x) admits a low-dimensional structure. Some papers assume that f(x) depends only on k coordinates: Wang et al. (2017) apply the Lasso to find the important set of coordinates, whereas Balasubramanian & Ghadimi (2018) simply change the step size to achieve an O(k(log(n)/ε)²) iteration complexity. Other papers assume more generally that f(x) = g(PAx) depends only on a k-dimensional subspace given by the range of PA: Djolonga et al. (2013) apply low-rank approximation to find the low-dimensional subspace, while Wang et al. (2013) use random embeddings. Hazan et al. (2017) assume that f(x) is a sparse collection of k-degree monomials on the Boolean hypercube and apply sparse recovery to achieve an O(nk) runtime bound. We will show that in the case f(x) = g(PAx), our algorithm inherently picks up any low-dimensional structure in f(x) and achieves a convergence rate that depends on k log(n). This initial convergence rate survives even if we perturb f(x) = g(PAx) + h(x), so long as h(x) is sufficiently small.
We will not cover the whole variety of black-box optimization methods, such as Bayesian optimization or genetic algorithms. In general, these methods attempt to solve a broader problem (e.g., finding multiple optima), have weaker theoretical guarantees, and may require substantial computation at each step: e.g., Bayesian optimization generally has theoretical iteration complexities that grow exponentially in dimension, and CMA-ES lacks provable complexity bounds beyond convex quadratic functions. In addition to the slow runtime and weaker guarantees, Bayesian optimization assumes the success of an inner optimization loop of the acquisition function. This inner optimization is often implemented with many iterations of a simpler zeroth-order method, justifying the need to understand gradient-less descent algorithms in their own context." }, { "heading": "1.1 OUR CONTRIBUTIONS", "text": "In this paper, we present GradientLess Descent (GLD), a class of truly gradient-free algorithms (also known as direct search algorithms) that are parameter-free and provably fast. Our algorithms are based on a simple intuition: for well-conditioned functions, if we start from a point and take a small step in a randomly chosen direction, there is a significant probability that we will reduce the objective function value. We present a novel analysis that relies on facts of high-dimensional geometry and can thus be viewed as a geometric analysis of gradient-free algorithms, recovering the standard convergence rates and step sizes. Specifically, we show that if the step size is on the order of O(1/√n), we can guarantee an expected decrease of 1 − Ω(1/n) in the optimality gap, based on geometric properties of the sublevel sets of a smooth and strongly convex function.
Our results are invariant under monotone transformations of the objective function; thus our convergence results also hold for a large class of non-convex functions that form a subclass of quasi-convex functions. Specifically, note that monotone transformations of convex functions are not necessarily convex (for example, √|x| is a monotone transformation of the convex function x², yet is non-convex). However, a monotone transformation of a convex function is always quasi-convex. 
The maximization of quasi-concave utility functions, which is equivalent to the minimization of quasi-convex functions, is an important topic of study in economics (e.g., Arrow & Enthoven (1961)).
Intuition suggests that the step-size dependence on dimensionality can be improved when f(x) admits a low-dimensional structure. With a careful choice of sampling distribution, we can show that if f(x) = g(PAx), where PA is a rank-k matrix, then our step size can be on the order of O(1/√k), as our optimization behavior is preserved under projections. We call this property affine-invariance and show that the number of function evaluations needed for convergence depends logarithmically on n. Unlike most previous algorithms in the high-dimensional setting, no expensive sparse recovery or subspace finding methods are needed. Furthermore, by novel perturbation arguments, we show that our fast convergence rates are robust and hold even under the more realistic assumption that f(x) = g(PAx) + h(x) with h(x) sufficiently small.
Theorem 1 (Convergence of GLD: Informal Restatement of Theorem 7 and Theorem 14). Let f(x) be any monotone transform of a convex function with condition number Q and R = ‖x0 − x∗‖. Let y be a sample from an appropriate distribution centered at x. Then, with constant probability,
f(y) − f(x∗) ≤ (f(x) − f(x∗)) (1 − 1/(5nQ)).
Therefore, we can find xT such that ‖xT − x∗‖ ≤ ε after T = Õ(nQ log(R/ε)) function evaluations. Furthermore, for functions f(x) = g(PAx) + h(x) with a rank-k matrix PA and sufficiently small h(x), we only require Õ(kQ log(n) log(R/ε)) evaluations.
Another advantage of our non-standard geometric analysis is that it allows us to deduce that our rates are optimal, with a matching lower bound (up to logarithmic factors), presenting theoretical evidence that gradient-free optimization inherently requires Ω(nQ) function evaluations to converge. While gradient-estimation algorithms can achieve a better theoretical iteration complexity of O(n√Q), they lack the monotone and affine invariance properties. Empirically, we see that invariance properties are important to successful optimization, as validated by experiments on synthetic BBOB and MuJoCo benchmarks that show the competitiveness of GLD against standard optimization procedures." }, { "heading": "2 PRELIMINARIES", "text": "We first define a few notations for the rest of the paper. Let X be a compact subset of Rn and let ‖ · ‖ denote the Euclidean norm. The diameter of X, denoted ‖X‖ = max_{x,x′ ∈ X} ‖x − x′‖, is the maximum distance between elements in X. Let f : X → R be a real-valued function which attains its minimum at x∗. We use f(X) = {f(x) : x ∈ X} to denote the image of f on a subset X of Rn, and B(c, r) = {x ∈ Rn : ‖c − x‖ ≤ r} to denote the ball of radius r centered at c.
Definition 2. The level set of f at a point x ∈ X is Lx(f) = {y ∈ X : f(y) = f(x)}. The sub-level set of f at a point x ∈ X is L↓x(f) = {y ∈ X : f(y) ≤ f(x)}. When the function f is clear from the context, we omit it.
Definition 3. We say that f is α-strongly convex for α > 0 if f(y) ≥ f(x) + 〈∇f(x), y − x〉 + (α/2)‖y − x‖² for all x, y ∈ X, and β-smooth for β > 0 if f(y) ≤ f(x) + 〈∇f(x), y − x〉 + (β/2)‖y − x‖² for all x, y ∈ X.
Definition 4. We say that g ◦ f is a monotone transformation of f if g : f(X) → R is a monotonically (and strictly) increasing function.
Monotone transformations preserve the level sets of a function in the sense that Lx(f) = Lx(g ◦ f). 
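To make this invariance concrete, the following minimal Python sketch (our own illustration, not code from the paper; it uses the transform g(y) = −exp(−√y) that appears in Section 5.1) shows that a GLD-style accept/reject step makes identical decisions for f and for a monotone transform g ◦ f, since the step only compares function values:

import numpy as np

def gld_step(f, x, radius, rng):
    # Propose a uniform sample from the ball B(x, radius); accept it only
    # if it strictly decreases f. The decision uses comparisons only, so
    # it is unchanged under any strictly increasing transform of f.
    v = rng.standard_normal(x.size)
    v = radius * (v / np.linalg.norm(v)) * rng.uniform() ** (1.0 / x.size)
    y = x + v
    return y if f(y) < f(x) else x

f = lambda x: np.sum(x ** 2)                # convex quadratic
g_of_f = lambda x: -np.exp(-np.sqrt(f(x)))  # monotone transform of f

rng1, rng2 = np.random.default_rng(0), np.random.default_rng(0)
x1 = x2 = np.ones(10)
for _ in range(100):
    x1 = gld_step(f, x1, 0.3, rng1)
    x2 = gld_step(g_of_f, x2, 0.3, rng2)
assert np.allclose(x1, x2)  # identical trajectories under shared randomness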
Because our algorithms depend only on the level set properties, our results generalize to any monotone transformation of a strongly convex and strongly smooth function. This leads to our extended notion of condition number.
Definition 5. A function f has condition number Q ≥ 1 if Q is the minimum ratio β/α over all functions g such that f is a monotone transformation of g and g is α-strongly convex and β-smooth.
When we work with low-rank extensions of f, we only care about the condition number of f within a rank-k subspace. Indeed, if f only varies along a rank-k subspace, then it has a strong convexity value of 0, making its condition number undefined. If f is α-strongly convex and β-smooth, then its Hessian matrix always has eigenvalues bounded between α and β. Therefore, we need a notion of a projected condition number. Let A ∈ Rd×k be some orthonormal matrix and let PA = AA⊤ be the projection matrix onto the column space of A.
Definition 6. For some orthonormal A ∈ Rd×k with d > k, a function f has condition number restricted to A, Q(A) ≥ 1, if Q(A) is the minimum ratio β/α over all functions g such that f is a monotone transformation of g and h(y) = g(Ay) is α-strongly convex and β-smooth." }, { "heading": "3 ANALYSIS OF DESCENT STEPS", "text": "The GLD template can be summarized as follows: given a sampling distribution D, we start at x0 and, in iteration t, we choose a scalar radius rt and sample yt from a distribution rtD centered around xt, where rt provides the scaling of D. Then, if f(yt) < f(xt), we update xt+1 = yt; otherwise, we set xt+1 = xt. The analysis of GLD follows from the main observation that the sublevel set of a monotone transformation of a strongly convex and strongly smooth function contains a ball of sufficiently large radius tangent to the level set (Lemma 15). In this section, we show that this property, combined with facts of high-dimensional geometry, implies that moving in a random direction from any point has a good chance of significantly improving the objective.
As mentioned before, the key to fast convergence is the careful choice of step sizes, which we describe in Theorem 7. The intuition here is that we would like to take as large a step as possible while keeping the probability of improving the objective function reasonably high, so, by insights from high-dimensional geometry, we choose a step size of Θ(1/√n). Also, we show that if f(x) admits a latent rank-k structure, then this step size can be increased to Θ(1/√k) and is therefore only dependent on the latent dimensionality of f(x), allowing for fast high-dimensional optimization. Lastly, our geometric understanding allows us to show that our convergence rates are optimal, with a matching lower bound. Without loss of generality, this section assumes that f(x) is strongly convex and smooth with condition number Q." }, { "heading": "3.1 STEP SIZE", "text": "Theorem 7. For any x such that (3/(5Q))‖x − x∗‖ ∈ [C1, C2], we can find integers 0 ≤ k1, k2 < log(C2/C1) such that if r = 2^{k1} C1 or r = 2^{−k2} C2, then a random sample y from the uniform distribution over Bx = B(x, r/√n) satisfies
f(y) − f(x∗) ≤ (f(x) − f(x∗)) (1 − 1/(5nQ))
with probability at least 1/4.
Proving the above theorem requires the following lemma about the intersection of balls in high dimensions; it is proved in the appendix.
Lemma 8. Let B1 and B2 be two balls in Rn of radii r1 and r2, respectively. Let ℓ be the distance between the centers. 
If r1 ∈ [ℓ/(2√n), ℓ/√n] and r2 ≥ ℓ − ℓ/(4n), then
vol(B1 ∩ B2) ≥ cn · vol(B1),
where cn is a dimension-dependent constant that is lower bounded by 1/4 at n = 1." }, { "heading": "3.2 GAUSSIAN SAMPLING AND LOW RANK STRUCTURE", "text": "A direct application of Lemma 8 seems to imply that uniform sampling of a high-dimensional ball is necessary. Upon further inspection, this can easily be replaced with a much simpler Gaussian sampling procedure that concentrates the mass close to the surface of the ball. This procedure lends itself to better analysis when f(x) admits a latent low-dimensional structure, since any affine projection of a Gaussian is still Gaussian.
Lemma 9. Let B1 and B2 be two balls in Rn of radii r1 and r2, respectively. Let ℓ be the distance between the centers. If r1 ∈ [ℓ/(2√n), ℓ/√n] and r2 ≥ ℓ − ℓ/n, and X = (X1, ..., Xn) are independent Gaussians with mean centered at the center of B1 and variance r1²/n, then
Pr[X ∈ B2] > c,
where c is a dimension-independent constant.
Assume that there exists some rank-k projection matrix PA such that f(x) = g(PAx), where k is much smaller than n. Because Gaussians projected onto a k-dimensional subspace are still Gaussians, we show that our algorithm has a dimension dependence on k. We let Qg(A) be the condition number of g restricted to the subspace A that drives the dominant changes in f(x).
Theorem 10. Let f(x) = g(PAx) for some unknown rank-k matrix PA with k < n, and suppose (3/(5Q))‖PA(x − x∗)‖ ∈ [C1, C2] for some numbers C1, C2 ∈ R+. Then, there exist integers 0 ≤ k1, k2 < log(C2/C1) such that if r = 2^{k1} C1 or r = 2^{−k2} C2, then a random sample y from a Gaussian distribution N(x, (r²/k) I) satisfies
f(y) − f(x∗) ≤ (f(x) − f(x∗)) (1 − 1/(5kQg(A)))
with constant probability.
Note that the speed-up in progress is due to the fact that we can now tolerate the larger sampling radius of Ω(1/√k) while maintaining a high probability of making progress. If k is unknown, we can simply use binary search to find the correct radius, at the cost of an extra factor of log(n) in our runtime.
The low-rank assumption is too restrictive to be realistic; however, our fast rates still hold, at least for the early stages of the optimization, even if we assume that f(x) = g(PAx) + h(x), where h(x) is a full-rank function bounded by |h(x)| ≤ δ. In this setting, we can show that convergence remains fast, at least until the optimality gap approaches δ.
Theorem 11. Let f(x) = g(PAx) + h(x) for some unknown rank-k matrix PA with k < n, where g, h are convex and |h| ≤ δ. Suppose (3/(5Q))‖PAx − z∗‖ ∈ [C1, C2] for some numbers C1, C2 ∈ R+, where z∗ minimizes g(z). Then, there exist integers 0 ≤ k1, k2 < log(C2/C1) such that if r = 2^{k1} C1 or r = 2^{−k2} C2, then a random sample y from a Gaussian distribution N(x, (r²/k) I) satisfies
f(y) − f(x∗) ≤ (f(x) − f(x∗)) (1 − 1/(10kQg(A)))
with constant probability whenever f(x) − f(x∗) ≥ 60δkQg(A)." }, { "heading": "3.3 LOWER BOUNDS", "text": "We show that our upper bounds given in the previous section are tight up to logarithmic factors for any symmetric sampling distribution D. These lower bounds are easily derived from our geometric perspective, as we show that a sampling distribution with a large radius gives an extremely low probability of intersection with the desired sub-level set. 
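As a rough empirical companion to Lemma 9 and to this lower-bound intuition (our own illustration, not an experiment from the paper), one can estimate by Monte Carlo how the probability that a single Gaussian step improves f(x) = ‖x‖² decays once the sampling radius exceeds the Θ(1/√n) scale:

import numpy as np

def descent_prob(n, radius, trials=50000, seed=0):
    # P[f(x + v) < f(x)] for f(x) = ||x||^2 at x = e_1, where
    # v ~ N(0, (radius^2 / n) I) as in the Gaussian sampling of Lemma 9.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal((trials, n)) * (radius / np.sqrt(n))
    v[:, 0] += 1.0                       # the candidate point y = e_1 + v
    return np.mean(np.sum(v * v, axis=1) < 1.0)

n = 100                                  # here 1/sqrt(n) = 0.1
for r in [0.05, 0.1, 0.2, 0.5, 1.0]:
    print(r, descent_prob(n, r))
# The success probability is a constant for r = Theta(1/sqrt(n)) but
# collapses quickly for larger radii, as the lower bound predicts.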
Consequently, while gradient-approximation algorithms can be accelerated to achieve a runtime that depends on the square root of the condition number Q, gradient-less methods that rely on random sampling are likely unable to be accelerated, according to our lower bound. However, we emphasize that monotone invariance allows these results to apply to a broader class of objective functions, beyond smooth and convex, so the results can be useful in practice despite the seemingly worse theoretical bounds.
Algorithm 1: Gradientless Descent with Binary Search (GLD-Search)
Input: function f : Rn → R, T ∈ Z+: number of iterations, x0: starting point, D: sampling distribution, R: maximum search radius, r: minimum search radius
1: Set K = log(R/r)
2: for t = 0, ..., T do
3:   Ball Sampling Trial i:
4:   for k = 0, ..., K do
5:     Set ri,k = 2^{−k} R.
6:     Sample vi,k ∼ ri,k D.
7:   end for
8:   Update: xt+1 = arg min_k { f(y) | y = xt or y = xt + vi,k }
9: end for
10: return xt
Theorem 12. Let y = x + v, where v is a random sample from rD for some radius r > 0 and D is a standard Gaussian or any rotationally symmetric distribution. Then, there exists a region X with positive measure such that for any x ∈ X,
f(y) − f(x∗) ≥ (f(x) − f(x∗)) (1 − √(5 log(nQ))/(nQ))
with probability at least 1 − 1/poly(nQ)." }, { "heading": "4 GRADIENTLESS ALGORITHMS", "text": "In this section, we present two algorithms that follow the same Gradientless Descent (GLD) template: GLD-Search and GLD-Fast, the latter being an optimized version of the former when an upper bound on the condition number of a function is known. Since both algorithms are monotone-invariant, we appeal to the previous section to derive fast convergence rates for any monotone transform of a convex f(x) with good condition number. We show the efficacy of both algorithms experimentally in the Experiments section." }, { "heading": "4.1 GRADIENTLESS DESCENT WITH BINARY SEARCH", "text": "Although the sampling distribution D is fixed, we have a choice of radii for each iteration of the algorithm. We can apply a binary search procedure to ensure progress. The most straightforward version of our algorithm is thus a naive binary sweep across an interval [r, R] that is unchanged throughout the algorithm. This allows us to give convergence guarantees without prior knowledge of the condition number, at the cost of an extra factor of log(n/ε).
Theorem 13. Let x0 be any starting point and f a blackbox function with condition number Q. Running Algorithm 1 with r = ε/√n, R = ‖X‖, and D = N(0, I) a standard Gaussian returns a point xT such that ‖xT − x∗‖ ≤ 2Q^{3/2} ε after O(nQ log(n‖X‖/ε)²) function evaluations with high probability.
Furthermore, if f(x) = g(PAx) admits a low-rank structure with PA a rank-k matrix, then we only require O(kQg(A) log(n‖X‖/ε)²) function evaluations to guarantee ‖PA(xT − x∗)‖ ≤ ε. This holds analogously even if f(x) = g(PAx) + h(x) is almost low-rank, where |h| ≤ δ and ε > 60δkQg(A)." }, { "heading": "4.2 GRADIENTLESS DESCENT WITH FAST BINARY SEARCH", "text": "GLD-Search (Algorithm 1) uses naive lower and upper bounds for the search radius around ‖xt − x∗‖, which incurs an extra factor of log(1/ε) in the runtime bound. In GLD-Fast, we remove this extra dependence on log(1/ε) by drastically reducing the range of the binary search. 
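Before turning to GLD-Fast, a minimal Python rendering of GLD-Search (Algorithm 1) might look as follows (our own sketch with Gaussian sampling; the authors' released code may differ):

import numpy as np

def gld_search(f, x0, T, R, r, seed=0):
    # GLD-Search (Algorithm 1) with D = N(0, I): in every iteration, sweep
    # the radii R, R/2, ..., down to about r, and keep the best sampled
    # point only if it improves on the current iterate.
    rng = np.random.default_rng(seed)
    K = int(np.ceil(np.log2(R / r)))
    x = np.array(x0, dtype=float)
    for _ in range(T):
        candidates = [x]
        for k in range(K + 1):
            candidates.append(x + R * 2.0 ** (-k) * rng.standard_normal(x.shape))
        x = min(candidates, key=f)
    return x

f = lambda x: np.sum(x ** 2)
x = gld_search(f, np.ones(20), T=500, R=10.0, r=1e-3)
print(f(x))  # far below f(x0) = 20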
GLD-Fast achieves this by exploiting the assumption that f has a good condition number upper bound Q̂ and by slowly halving the diameter of the search space every few iterations, since we expect xt → x∗ as t → ∞.
Algorithm 2: Gradientless Descent with Fast Binary Search (GLD-Fast)
Input: function f : Rn → R, T ∈ Z+: number of iterations, x0: starting point, D: sampling distribution, R: diameter of search space, Q: condition number bound
1: Set K = log(4√Q), H = nQ log(Q)
2: for t = 1, ..., T do
3:   Set R = R/2 when t ≡ 0 mod H (every H iterations).
4:   Ball Sampling Trial i:
5:   for k = −K, ..., 0, ..., K do
6:     Set ri,k = 2^{−k} R.
7:     Sample vi,k ∼ ri,k D.
8:   end for
9:   Update: xt+1 = arg min_i { f(y) | y = xt or y = xt + vi }
10: end for
11: return xt
Theorem 14. Let x0 be any starting point and f a blackbox function with condition number upper bounded by Q. Running Algorithm 2 with suitable parameters returns a point xT such that f(xT) − f(x∗) ≤ ε after O(nQ log²(Q) log(‖X‖/ε)) function evaluations with high probability.
Furthermore, if f(x) = g(PAx) admits a low-rank structure with PA a rank-k matrix, then we only require O(kQg(A) log(n) log²(Qg(A)) log(‖X‖/ε)) function evaluations to guarantee ‖PA(xT − x∗)‖ ≤ ε. This holds analogously even if f(x) = g(PAx) + h(x) is almost low-rank, where |h| ≤ δ and ε > 60δkQg(A)." }, { "heading": "5 EXPERIMENTS", "text": "We tested GLD algorithms on a simple class of objective functions and compared them to Accelerated Random Search (ARS) by Nesterov & Spokoiny (2011), which has linear convergence guarantees on strongly convex and strongly smooth functions. To our knowledge, ARS makes the weakest assumptions among the zeroth-order algorithms that have linear convergence guarantees and perform only a constant order of operations per iteration. Our main conclusion is that GLD-Fast is comparable to ARS and tends to achieve a reasonably low error much faster than ARS in high dimensions (≥ 50). In low dimensions, GLD-Search is competitive with GLD-Fast and ARS, though it requires no information about the function.
We let H_{α,β,n} ∈ Rn×n be a diagonal matrix with its i-th diagonal entry equal to α + (β − α)(i − 1)/(n − 1). In simple words, its diagonal elements form an evenly spaced sequence of numbers from α to β. Our objective function is then f_{α,β,n} : Rn → R with f_{α,β,n}(x) = (1/2) x⊤ H_{α,β,n} x, which is α-strongly convex and β-strongly smooth. We always use the same starting point x = (1/√n)(1, . . . , 1), which requires ‖X‖ = √Q for our algorithms. We plot the optimality gap f(bt) − f(x∗) against the number of function evaluations, where bt is the best point observed so far after t evaluations. Although all tested algorithms are stochastic, they have low variance on the objective functions that we use; hence we average the results over 10 runs and omit the error bars in the plots.
We ran experiments on f_{1,8,n} with imperfect curvature information α̂ and β̂ (see Figure 3 in the appendix). GLD-Search is independent of the condition number. GLD-Fast takes only one parameter, the upper bound on the condition number; if the approximation factor is z, then we pass 8z as the upper bound. ARS requires both the strong convexity and smoothness parameters. We test three different distributions of the approximation error: when the approximation factor is z, ARS-alpha gets (α/z, β), ARS-beta gets (α, zβ), and ARS-even gets (α/√z, √z β) as input. GLD-Fast is more robust and faster than ARS when the condition number is over-approximated. When the condition number is underestimated, GLD-Fast still steadily converges." 
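The synthetic objective used in these experiments is easy to reproduce; a minimal sketch (ours, not the authors' experiment code):

import numpy as np

def make_objective(alpha, beta, n):
    # f_{alpha,beta,n}(x) = 0.5 * x^T H x with a diagonal H whose entries
    # are evenly spaced from alpha to beta, so f is alpha-strongly convex
    # and beta-smooth, with condition number Q = beta / alpha.
    h = alpha + (beta - alpha) * np.arange(n) / (n - 1)
    return lambda x: 0.5 * np.sum(h * x * x)

n = 50
f = make_objective(1.0, 8.0, n)
x0 = np.ones(n) / np.sqrt(n)   # the fixed starting point from this section
print(f(x0))                    # 0.5 * mean(h) = 0.5 * (1 + 8) / 2 = 2.25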
}, { "heading": "5.1 MONOTONE TRANSFORMATIONS", "text": "In Figure 1, we ran experiments on f1,8,n for different settings of dimensionality n, and its monotone transformation with g(y) = − exp(−√y). For this experiment, we assume a perfect oracle for the strong convexity and smoothness parameters of f . The convergence of GLD is totally unaffected by the monotone transformation. For the low-dimension cases of a transformed function (bottom half of the figure), we note that there are inflection points in the convergence curve of ARS. This means that ARS initially struggles to gain momentum and then struggles to stop the momentum when it gets close to the optimum. Another observation is that unlike ARS that needs to build up momentum, GLD-Fast starts from a large radius and therefore achieves a reasonably low error much faster than ARS, especially in higher dimensions." }, { "heading": "5.2 BBOB BENCHMARKS", "text": "To show that practicality of GLD on practical and non-convex settings, we also test GLD algorithms on a variety of BlackBox Optimization Benchmarking (BBOB) functions (Hansen et al., 2009). For each function, the optima is known and we use the log optimality gap as a measure of competance. Because each function can exhibit varying forms of non-smoothness and convexity, all algorithms are ran with a smoothness constant of 10 and a strong convexity constant of 0.1. All other setup details are same as before, such as using a fixed starting point.\nThe plots, given in Appendix C, underscore the superior performance of GLD algorithms on various BBOB functions, demonstrating that GLD can successfully optimize a diverse set of functions even without explicit knowledge of condition number. We note that BBOB functions are far from convex and smooth, many exhibiting high conditioning, multi-modal valleys, and weak global structure. Due to our radius search produce, our algorithm appears more robust to non-ideal settings with non-convexity and ill conditioning. As expected, we note that GLD-Fast tend to outperform GLDSearch, especially as the dimension increases, matching our theoretical understanding of GLD." }, { "heading": "5.3 MUJOCO CONTROL BENCHMARKS AND AFFINE TRANSFORMATIONS", "text": "We also ran experiments on the Mujoco benchmarks with varying architectures, both linear and nonlinear. This demonstrates the viability of our approach even in the non-convex, high dimensional setting. We note that however, unlike e.g. ES which uses all queries to form a gradient direction, our algorithm removes queries which produce less reward than using the current arg-max, which can be an information handicap. Nevertheless, we see that our algorithm still achieves competitive performance on the maximum reward. We used a horizon of 1000 for all experiments.\nWe further tested the affine invariance of GLD on the policy parameters from using Gaussian ball sampling, under the HalfCheetah benchmark by projecting the state s of the MDP with linear policy to a higher dimensional state Ws, using a matrix multiplication with an orthonormal W . Specifically, in this setting, for a linear policy parametrized by matrix K, the objective function is thus J(KW ) where πK(Ws) = KWs. Note that when projecting into a high dimension, there is a slowdown factor of log dnewdold where dnew, dold are the new high dimension and previous base dimension, respectively, due to the binary search in our algorithm on a higher dimensional space. 
For our HalfCheetah case, we projected the 17-dimensional base state to a 200-dimensional state, which suggests a slowdown factor of log(200/17) ≈ 3.5. This can be seen in our plots in the appendix (Figure 15)." }, { "heading": "6 CONCLUSION", "text": "We introduced GLD, a robust zeroth-order optimization algorithm that is simple and efficient, and we showed strong theoretical convergence bounds via our novel geometric analysis. As demonstrated by our experiments on BBOB and MuJoCo benchmarks, GLD performs very robustly even in the non-convex setting, and its monotone and affine invariance properties give theoretical insight into its practical efficiency.
GLD is very flexible and allows easy modifications. For example, it could use momentum terms to keep moving in the same direction that improved the objective, or sample from adaptively chosen ellipsoids similarly to adaptive gradient methods (Duchi et al., 2011; McMahan & Streeter, 2010). Just as one may decay or adaptively vary learning rates for gradient descent, one might similarly change the distribution from which the ball-sampling radii are chosen, perhaps shrinking the minimum radius as the algorithm progresses, or concentrating more probability mass on smaller radii.
Likewise, GLD could be combined with random restarts or other restart policies developed for gradient descent. Analogously to adaptive per-coordinate learning rates (Duchi et al., 2011; McMahan & Streeter, 2010), one could adaptively change the shape of the balls being sampled into ellipsoids with various length-scale factors. Arbitrary combinations of the above variants are also possible." }, { "heading": "A PROOFS OF SECTION 3", "text": "Lemma 15. If h has condition number Q, then for all x ∈ X, there is a ball of radius Q⁻¹‖x − x∗‖ that is tangent at x and inside the sublevel set L↓x(h).
Proof. Write h = g ◦ f such that f is α-strongly convex and β-smooth for some β = Qα and g is monotonically increasing. From the smoothness assumption, we have for any s,
f(x − (1/β)∇f(x) + s) ≤ f(x) + 〈∇f(x), s − (1/β)∇f(x)〉 + (β/2)‖s − (1/β)∇f(x)‖² = f(x) + (β/2)(‖s‖² − (1/β²)‖∇f(x)‖²).
Consider the ball B = B(x − (1/β)∇f(x), (1/β)‖∇f(x)‖). For any y ∈ B, the above inequality implies f(y) ≤ f(x). Hence, when we apply g on both sides, we still have h(y) ≤ h(x) for all y ∈ B. Therefore, B ⊆ L↓x(h).
By strong convexity, ‖∇f(x)‖ ≥ α‖x − x∗‖. It follows that the radius of B is at least (α/β)‖x − x∗‖.
Proof of Lemma 8. Without loss of generality, consider the unit distance case where ℓ = 1. Furthermore, it suffices to prove the claim for the smallest possible radius r2 = 1 − 1/(4n). Since |r1 − r2| ≤ ℓ ≤ r1 + r2, the intersection B1 ∩ B2 is composed of two hyperspherical caps glued end to end. We lower bound vol(B1 ∩ B2) by the volume of the cap C1 of B1 that is contained in the intersection. Consider the triangle with sides r1, r2, and ℓ. From classic geometry, the distance from the center of B1 to the base of C1 is
c1 = (1/2)(1 + r1² − r2²) > 0. (1)
The volume of a spherical cap is (Li, 2011)
vol(C1) = (1/2) vol(B1) I_{1 − c1²/r1²}((n + 1)/2, 1/2),
where I is the regularized incomplete beta function defined as
I_x(a, b) = (∫₀ˣ t^{a−1}(1 − t)^{b−1} dt) / (∫₀¹ t^{a−1}(1 − t)^{b−1} dt),
where x ∈ [0, 1] and a, b ∈ (0, ∞). Note that for any fixed a and b, I_x(a, b) is increasing in x. Hence, in order to obtain a lower bound on vol(C1), we want to lower bound 1 − c1²/r1² or, equivalently, upper bound c1²/r1².
Write r1 = α/(2√n) for some α ∈ [1, 2]. From Eq. 
(1),
c1 = 1/(4n) + α²/(8n) − 1/(32n²).
Hence,
c1/r1 = (1/(16√n)) (8/α + 4α − 1/n).
Since g(α) = 8/α + 4α is convex on [1, 2], g(α) ≤ max(g(1), g(2)) = 12. It follows that
c1/r1 ≤ (1/(16√n)) (12 − 1/n) ≤ 3/(4√n).
So 1 − c1²/r1² ≥ 1 − 9/(16n). To complete the proof, note that Vn := I_{1 − 9/(16n)}((n + 1)/2, 1/2) is increasing in n, and V1 = 1/4. As n goes to infinity, this value converges to 1, as B1 ⊂ B2.
Proof of Theorem 7. Let ν = 1/(5nQ) and let q = (1 − ν)x + νx∗. Let Bq = B(cq, rq) be a ball that has q on its surface, lies inside L↓q, and has radius rq = Q⁻¹‖q − x∗‖; Lemma 15 guarantees its existence. Suppose that
vol(Bx ∩ Bq) ≥ (1/4) vol(Bx) (2)
and that a random sample y from Bx belongs to Bq, which happens with probability at least 1/4. Then, our guarantee follows by
f(y) − f(x∗) ≤ f(q) − f(x∗) ≤ (1 − ν)f(x) + νf(x∗) − f(x∗) = (1 − ν)(f(x) − f(x∗)),
where the first inequality follows from Lemma 15 and the second from the convexity of f.
Therefore, it now suffices to prove Eq. (2). To do so, we will apply Lemma 8 after showing that the radii of Bx and Bq are in the proper ranges. Let ℓ = ‖x − cq‖ and note that
ℓ ≤ ‖x − q‖ + rq (3)
= ν‖x − x∗‖ + rq = ν‖x − x∗‖ + Q⁻¹‖q − x∗‖ ≤ (ν + Q⁻¹(1 − ν))‖x − x∗‖ (4)
≤ (6/(5Q))‖x − x∗‖.
Since x is outside of Bq, we also have
ℓ ≥ rq = Q⁻¹‖q − x∗‖ = Q⁻¹(1 − ν)‖x − x∗‖ ≥ (4/(5Q))‖x − x∗‖. (5)
It follows that
ℓ/2 ≤ (3/(5Q))‖x − x∗‖ ≤ ℓ.
In log2 space, our choice of k1 is equivalent to starting from log2 C1 and sweeping through the range [log2 C1, log2 C2] at intervals of size 1. This is guaranteed to find a point between ℓ/2 and ℓ, which is also an interval of size 1. Therefore, there exists a k1 satisfying the theorem statement, and similarly we can prove the existence of k2.
Finally, it remains to show that rq ≥ (1 − 1/(4n))ℓ. From Eq. (3), it suffices to show that ‖x − q‖ ≤ ℓ/(4n), or equivalently ν‖x − x∗‖ ≤ ℓ/(4n). From Eq. (4),
‖x − q‖ = ν‖x − x∗‖ ≤ νQ(1 − ν)⁻¹ ℓ.
For any Q, n ≥ 1, 1 − ν ≥ 4/5. So
νQ(1 − ν)⁻¹ = (1/(5n)) (1 − ν)⁻¹ ≤ 1/(4n) (6)
and the proof is complete.
Proof of Lemma 9. Without loss of generality, let ℓ = 1, let B2 be centered at the origin with radius r2, and let B1 be centered at e1 = (1, 0, ..., 0). Then, we simply want to show that
Pr[(1 + X1)² + Σ_{i=2}^n Xi² ≤ r2²] > c.
By Markov's inequality, Σ_{i=2}^n Xi² ≥ 2/n holds with probability at most 1/2, since its expectation is at most r1² ≤ 1/n. Since X1 is independent and r2 ≥ 1 − 1/n, it suffices to show that
Pr[(1 + X1)² ≤ 1 − 4/n] = Ω(1).
Since X1 has standard deviation at least r1/√n ≥ 1/(2n), we see that the probability of deviating at least a few standard deviations below zero is at least a constant.
Proof of Theorem 10. We can consider the projection of all points onto the column space of A; since the Gaussian sampling process is preserved under this projection, our proof follows from applying Theorem 7 restricted to the k-dimensional subspace and using Lemma 9 in place of Lemma 8.
Proof of Theorem 11. By the boundedness of h, since f(x) − f(x∗) ≥ 60δkQg(A), we see that g(PAx) − g(z∗) ≥ 60δkQg(A) − 2δ > 0. By Lemma 9, we see that if we sample from a Gaussian distribution y ∼ N(x, (r²/k) I), then, with z∗ the minimizer of g restricted to the column space of A,
g(PAy) − g(z∗) ≤ (g(PAx) − g(z∗)) (1 − 1/(5kQg(A)))
with constant probability. By the boundedness of h, we know that h(y) ≤ h(x) + 2δ. Furthermore, this also implies that g(PAx∗) ≤ g(z∗) + 2δ. 
Therefore, we know that the decrease is at least
f(y) − f(x∗) = g(PAy) − g(PAx∗) + h(y) − h(x∗)
≤ g(PAy) − g(z∗) + 2δ
≤ (g(PAx) − g(z∗)) (1 − 1/(5kQg(A))) + 2δ
≤ (g(PAx) − g(PAx∗) + 2δ) (1 − 1/(5kQg(A))) + 2δ
≤ (f(x) − f(x∗) + 4δ) (1 − 1/(5kQg(A))) + 2δ
≤ (f(x) − f(x∗)) (1 − 1/(5kQg(A))) + 6δ.
Since f(x) − f(x∗) ≥ 60δkQg(A), we conclude that (f(x) − f(x∗))(1 − 1/(5kQg(A))) + 6δ ≤ (f(x) − f(x∗))(1 − 1/(10kQg(A))), and our proof is complete.
Proof of Theorem 12. Our main proof strategy is to show that progress can only be made with a radius of size O(√(log(nQ))/(nQ)); larger radii cannot find descent directions with high probability. Consider a simple ellipsoid function f(x) = x⊤Dx, where D is a diagonal matrix with D11 ≤ D22 ≤ ... ≤ Dnn; WLOG we let D11 = 1 and Dii = Q for i > 1. The optimum is x∗ = 0 with f(x∗) = 0. Consider the region X = {x = (x1, x2, ..., xn) : 1 ≥ x1 ≥ 0.9, |xi| ≤ 0.1/(Q√n)}. Then, if we let v ∼ N(0, I) be a standard Gaussian vector, for some radius r the probability of finding a descent direction is
Pr[f(x + rv) ≤ f(x)] = Pr[(x1 + rv1)² + Σ_{i>1} Dii(xi + rvi)² ≤ x1² + Σ_{i>1} Dii xi²]
= Pr[2rx1v1 + r²v1² + Q Σ_{i>1} (2rxivi + r²vi²) ≤ 0]
≤ Pr[2rx1v1 ≤ −Q Σ_{i>1} 2rxivi − Qr² Σ_{i>1} vi²]
= Pr[x1v1 ≤ −Q Σ_{i>1} xivi − (1/2)Qr Σ_{i>1} vi²].
By standard concentration bounds for sub-exponential variables, we have
Pr[|(1/(n − 1)) Σ_{i>1} vi² − 1| ≥ t] ≤ 2e^{−(n−1)t²/8}.
Therefore, with exponentially high probability, Σ_{i>1} vi² ≥ n/2. Also, since |xi| ≤ 0.1/(Q√n), Chernoff bounds give
Pr[|Σ_{i>1} xivi| ≥ t] ≤ 2e^{−50(Qt)²}.
Therefore, with probability at least 1 − 1/(nQ)³, |Σ_{i>1} xivi| ≤ √(log(nQ))/Q.
If Qrn ≥ Ω(√(log(nQ))), then we have
−Q Σ_{i>1} xivi − (1/2)Qr Σ_{i>1} vi² ≤ −Ω(√(log(nQ))).
We conclude that the probability of descent is upper bounded by Pr[x1v1 ≤ −Ω(√(log(nQ)))]. This probability is Φ(−l), where Φ is the cumulative density of a standard normal and l = Ω(√(log(nQ))). By a naive upper bound, we see that
Φ(−l) = (1/√(2π)) ∫_l^∞ e^{−x²/2} dx ≤ (C/l) ∫_l^∞ x e^{−x²/2} dx = (C/l) e^{−l²/2}.
Since l = Ω(√(log(nQ))), we conclude that with probability at least 1 − 1/poly(nQ), we have f(y) − f(x∗) ≥ f(x) − f(x∗).
Otherwise, we are in the case that Qrn ≤ O(√(log(nQ))). Arguing similarly as before, with high probability our objective function, and hence each coordinate, can change by at most O(√(log(nQ))/(Qn)).
Next, we extend our proof to any symmetric distribution D. Since D is rotationally symmetric, if we parametrize v = (r, θ) in polar coordinates, then the p.d.f. of any scaling of D must take the form p(v) = pr(r)u(θ), where u(θ) induces the uniform distribution over the unit sphere. Therefore, if Y is a random variable that follows D, then we may write Y = Rv/‖v‖, where R is a random scalar with p.d.f. pr(r), v is a standard Gaussian vector, and R and v are independent. As previously argued, ‖v‖² ∈ [0.5n, 1.5n] with exponentially high probability. Therefore, if R ≥ Ω(√(log(nQ))/Q), the same arguments imply that Y is a descent direction with only polynomially small probability. Thus, when Y is a descent direction, it must be that R ≤ O(√(log(nQ))/Q), and, as argued previously, our lower bound follows." }, { "heading": "B PROOFS OF SECTION 4", "text": "Proof of Theorem 13. 
By the Gaussian version of Theorem 7 (the full-rank version of Theorem 10), as long as our binary search sweeps between the minimum search radius r ≤ (3/(5Q√n))‖x − x∗‖ and the maximum search radius R = ‖X‖ (the diameter of the whole space), the objective value will decrease multiplicatively by 1 − 1/(5nQ) in each iteration with constant probability. Therefore, if ‖xt − x∗‖ ≥ 2Qε and we set r = ε/√n and R = ‖X‖, then with high probability we expect f(xT) − f(x∗) ≤ βQ²ε² after T = O(nQ log(‖X‖/(Qε))) iterations, where we note that F = f(x0) − f(x∗) ≤ β‖X‖² by smoothness. Otherwise, if there exists some xt such that ‖xt − x∗‖ ≤ 2Qε, then f(xT) − f(x∗) ≤ f(xt) − f(x∗) ≤ 4βQ²ε². Therefore, by strong convexity, we conclude that in either case ‖xT − x∗‖ ≤ 2Q^{3/2} ε. Finally, note that each iteration uses a binary search that requires O(log(R/r)) = O(log(n‖X‖/ε)) function evaluations. Combining these bounds, we derive our result. The low-rank result follows from applying Theorem 10 and Theorem 11 instead.
Proof of Theorem 14. Let H = O(nQ log(Q)) be the number of iterations between successive radius halvings; we initialize R = ‖X‖ and halve R every H iterations. We call the iterations between two halving steps an epoch. We claim that ‖xi − x0‖ ≤ R for all iterations and proceed by induction on the epoch number. The base case is trivial.
Assume that ‖xi − x0‖ ≤ R for all iterations in the previous epoch, and let iteration is be the start of the epoch and iteration is + H the end of the epoch. Then, since ‖x_{is} − x∗‖ ≤ R, we see that f(x_{is}) − f(x∗) ≤ βR² by smoothness. If R/(4√Q) ≤ ‖xi − x∗‖ ≤ 4√Q R for all i in the previous epoch, then by the Gaussian version of Theorem 7 (Theorem 10), since we do a binary sweep from R/(4√Q) to 4√Q R, we can choose D accordingly so that our objective value is guaranteed to decrease multiplicatively by 1 − 1/(5nQ) with constant probability, at a cost of O(log(Q)) function evaluations per iteration. This implies that, with high probability, after O(nQ log(Q)) iterations we conclude
f(x_{is+H}) − f(x∗) ≤ (1/(4Q)) (f(x_{is}) − f(x∗)) ≤ (α/4)‖x_{is} − x∗‖² ≤ (α/4) R².
Otherwise, there exists some 1 ≤ j ≤ H such that ‖x_{is+j} − x∗‖ ≥ 4√Q R or ‖x_{is+j} − x∗‖ ≤ R/(4√Q). If it is the former, then by strong convexity, f(x_{is+j}) − f(x∗) ≥ α‖x_{is+j} − x∗‖² ≥ 2βR², which contradicts the fact that f(x_{is}) − f(x∗) ≤ βR² by smoothness. If it is the latter, then by smoothness we reach the same conclusion:
f(x_{is+H}) − f(x∗) ≤ f(x_{is+j}) − f(x∗) ≤ β‖x_{is+j} − x∗‖² ≤ (α/4) R².
Therefore, by strong convexity, we have
‖x_{is+H} − x∗‖ ≤ √((f(x_{is+H}) − f(x∗))/α) ≤ R/2,
and our induction is complete. Therefore, we conclude that after log(‖X‖/ε) epochs, we have ‖xT − x∗‖ ≤ ε. Each epoch has H iterations, each with O(log(Q)) function evaluations, and so our result follows.
The low-rank result follows from applying Theorem 10 and Theorem 11 instead. However, note that since we do not know the latent dimension k, we must extend the binary search, incurring an extra log(n) factor in the binary search cost." 
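As a numerical sanity check on the geometric constant in Lemma 8 (our own illustration, not part of the paper), one can estimate vol(B1 ∩ B2)/vol(B1) by sampling uniformly from B1:

import numpy as np

def intersection_fraction(n, trials=100000, seed=0):
    # Lemma 8 setup at unit center distance: B1 = B(e_1, r1) with
    # r1 = 1/sqrt(n) and B2 = B(0, r2) with r2 = 1 - 1/(4n). Report the
    # fraction of uniform samples from B1 that also land in B2.
    rng = np.random.default_rng(seed)
    r1, r2 = 1.0 / np.sqrt(n), 1.0 - 1.0 / (4.0 * n)
    u = rng.standard_normal((trials, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    pts = u * (r1 * rng.uniform(size=(trials, 1)) ** (1.0 / n))
    pts[:, 0] += 1.0                      # shift so B1 is centered at e_1
    return np.mean(np.linalg.norm(pts, axis=1) <= r2)

for n in [1, 2, 5, 10, 50]:
    print(n, intersection_fraction(n))
# The fraction stays bounded below by a dimension-independent constant,
# consistent with the lower bound in Lemma 8.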
}, { "heading": "C FIGURES", "text": "C.1 BBOB FUNCTION PLOTS\nFigure 4: Convergence plot for the BBOB Rastrigin Function.\n0 1 2 3 4 5 6 7 8 9 10 Evaluations (Thousands)\n2\n3\n4\nLo g\nOp tim\nal ity\nG ap\nDimension = 5\n0 1 2 3 4 5 6 7 8 9 10 Evaluations (Thousands)\n2\n3\n4\n5 Dimension = 10\n0 1 2 3 4 5 6 7 8 9 10 Evaluations (Thousands)\n2\n3\n4\n5\nDimension = 20\n0 1 2 3 4 5 6 7 8 9 10 Evaluations (Thousands)\n0\n2\n4\n6 Dimension = 40\nARS GLD-Fast GLD-Search\nConvergence Plots for Rastrigin Function\nC.2 MUJOCO CONTROL PLOTS" } ]
2021
GRADIENTLESS DESCENT: HIGH-DIMENSIONAL ZEROTH-ORDER OPTIMIZATION
SP:9d2476df24b81661dc5ad76b13c8fd5fd1653381
[ "This paper looks at privacy concerns regarding data for a specific model before and after a single update. It discusses the privacy concerns thoroughly and look at language modeling as a representative task. They find that there are plenty of cases namely when the composition of the sequences involve low frequency words, that a lot of information leak occurs.", "This paper studies the privacy issue of widely used neural language models in the current literature. The authors consider the privacy implication phenomena of two model snapshots before and after an update. The updating setting considered in this paper is kind of interesting. However, the contribution of the current paper is not strong enough and there are many unclear experimental settings in the current paper." ]
To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models. In the setting of language models, we show that a comparative analysis of model snapshots before and after an update can reveal a surprising amount of detailed information about the changes in the data used for training before and after the update. We discuss the privacy implications of our findings, propose mitigation strategies and evaluate their effect.
[]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H. Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In 23rd ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Galen Andrew", "Steve Chien", "Nicolas Papernot" ], "title": "Online; accessed 09-Sep-2019", "venue": "TensorFlow Privacy. https://github.com/ tensorflow/privacy,", "year": 2019 }, { "authors": [ "Raef Bassily", "Adam Smith", "Abhradeep Thakurta" ], "title": "Private empirical risk minimization: Efficient algorithms and tight error bounds", "venue": "In 55th IEEE Annual Symposium on Foundations of Computer Science,", "year": 2014 }, { "authors": [ "Nicholas Carlini", "Chang Liu", "Jernej Kos", "Úlfar Erlingsson", "Dawn Song" ], "title": "The secret sharer: Evaluating and testing unintended memorization in neural networks", "venue": "CoRR, abs/1802.08232,", "year": 2018 }, { "authors": [ "David Cash", "Paul Grubbs", "Jason Perry", "Thomas Ristenpart" ], "title": "Leakage-abuse attacks against searchable encryption", "venue": "In 22nd ACM SIGSAC Conference on Computer and Communications Security,", "year": 2015 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "NAACLHLT 2019,", "year": 2019 }, { "authors": [ "Cynthia Dwork", "Aaron Roth" ], "title": "The algorithmic foundations of differential privacy", "venue": "Foundations and Trends in Theoretical Computer Science,", "year": 2014 }, { "authors": [ "Cynthia Dwork", "Moni Naor", "Toniann Pitassi", "Guy N. Rothblum", "Sergey Yekhanin" ], "title": "Pan-private streaming algorithms", "venue": "In Innovations in Computer Science,", "year": 2010 }, { "authors": [ "Matt Fredrikson", "Somesh Jha", "Thomas Ristenpart" ], "title": "Model inversion attacks that exploit confidence information and basic countermeasures", "venue": "In 22nd ACM SIGSAC Conference on Computer and Communications Security,", "year": 2015 }, { "authors": [ "Antonio Ginart", "Melody Y. Guan", "Gregory Valiant", "James Zou" ], "title": "Making AI forget you: Data deletion in machine learning", "venue": "CoRR, abs/1907.05012,", "year": 2019 }, { "authors": [ "Briland Hitaj", "Giuseppe Ateniese", "Fernando Perez-Cruz" ], "title": "Deep models under the GAN: Information leakage from collaborative deep learning", "venue": "ACM SIGSAC Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "Ken Lang" ], "title": "NewsWeeder: Learning to filter netnews", "venue": "In 12th International Machine Learning Conference on Machine Learning,", "year": 1995 }, { "authors": [ "Mitchell P. Marcus", "Beatrice Santorini", "Mary Ann Marcinkiewicz" ], "title": "Building a large annotated corpus of English: The Penn Treebank", "venue": "Computational Linguistics,", "year": 1993 }, { "authors": [ "H. 
Brendan McMahan", "Daniel Ramage", "Kunal Talwar", "Li Zhang" ], "title": "Learning differentially private recurrent language models", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Luca Melis", "Congzheng Song", "Emiliano De Cristofaro", "Vitaly Shmatikov" ], "title": "Inference attacks against collaborative learning", "venue": "CoRR, abs/1805.04049,", "year": 2018 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Ahmed Salem", "Apratim Bhattacharyya", "Michael Backes", "Mario Fritz", "Yang Zhang" ], "title": "Updates-leak: Data set inference and reconstruction attacks in online learning", "venue": "CoRR, abs/1904.01067,", "year": 2019 }, { "authors": [ "Ahmed Salem", "Yang Zhang", "Mathias Humbert", "Pascal Berrang", "Mario Fritz", "Michael Backes" ], "title": "ML-leaks: Model and data independent membership inference attacks and defenses on machine learning models", "venue": "In 26th Annual Network and Distributed System Security Symposium,", "year": 2019 }, { "authors": [ "Reza Shokri", "Marco Stronati", "Congzheng Song", "Vitaly Shmatikov" ], "title": "Membership inference attacks against machine learning models", "venue": "In 38th IEEE Symposium on Security and Privacy,", "year": 2017 }, { "authors": [ "Congzheng Song", "Vitaly Shmatikov" ], "title": "Auditing data provenance in text-generation models", "venue": "CoRR, abs/1811.00513,", "year": 2018 }, { "authors": [ "S. Song", "K. Chaudhuri", "A.D. Sarwate" ], "title": "Stochastic gradient descent with differentially private updates", "venue": "IEEE Global Conference on Signal and Information Processing,", "year": 2013 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Wojciech Zaremba", "Ilya Sutskever", "Oriol Vinyals" ], "title": "Recurrent neural network regularization", "venue": "CoRR, abs/1409.2329,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Over the last few years, deep learning has made sufficient progress to be integrated into intelligent, user-facing systems, which means that machine learning models are now part of the regular software development lifecycle. As part of this move towards concrete products, models are regularly re-trained to improve performance when new (and more) data becomes available, to handle distributional shift as usage patterns change, and to respect user requests for removal of their data.\nIn this work, we show that model updates1 reveal a surprising amount of information about changes in the training data, in part, caused by neural network’s tendency to memorize input data. As a consequence, we can infer fine-grained information about differences in the training data by comparing two trained models even when the change to the data is as small as 0.0001% of the original dataset. This has severe implications for deploying machine learning models trained on user data, some of them counter-intuitive: for example, honoring a request to remove a user’s data from the training corpus can mean that their data becomes exposed by releasing an updated model trained without it. This effect also needs to be considered when using public snapshots of high-capacity models (e.g. BERT (Devlin et al., 2019)) that are then fine-tuned on smaller, private datasets.\nWe study the privacy implications of language model updates, motivated by their frequent deployment on end-user systems (as opposed to cloud services): for instance, smartphones are routinely shipped with (simple) language models to power predictive keyboards. The privacy issues caused by the memorizing behavior of language models have recently been studied by Carlini et al. (2018), who showed that it is sometimes possible to extract out-of-distribution samples inserted into the training data of a model. In contrast, we focus on in-distribution data, but consider the case of having access to two versions of the model. A similar setting has recently been investigated by Salem et al. (2019a) with a focus on fully-connected and convolutional architectures applied to image classification, whereas we focus on natural language.\nWe first introduce our setting and methodology in Section 2, defining the notion of a differential score of token sequences with respect to two models. This score reflects the changes in the probabilities of individual tokens in a sequence. We then show how beam search can find token sequences with high differential score and thus recover information about differences in the training data. Our experiments in Section 3 show that our method works in practice on a number of datasets and model architectures including recurrent neural networks and modern transformer architectures. Specifically, we consider a) a synthetic worst-case scenario where the data used to train two model snapshots differs only in a canary phrase that was inserted multiple times; b) a more realistic scenario where we compare\n1We use the term “model update“ to refer to an update in the parameters of the model, caused for example by a training run on changed data. This is distinct from an update to the model architecture, which changes the number or use of parameters.\na model trained on Reddit comments with one that was trained on the same data augmented with subject-specific conversations. 
We show that an adversary who can query two model snapshots for predictions can recover the canary phrase in the former scenario, and fragments of discourse from conversations in the latter. Moreover, in order to learn information about such model updates, the adversary does not require any information about the data used for training of the models nor knowledge of model parameters or its architecture.\nFinally, we discuss mitigations such as training with differential privacy in Section 4. While differential privacy grants some level of protection against our attacks, it incurs a substantial decrease in accuracy and a high computational cost." }, { "heading": "2 METHODOLOGY", "text": "" }, { "heading": "2.1 NOTATION", "text": "Let T be a finite set of tokens, T ∗ be the set of finite token sequences, and Dist(T ) denote the set of probability distributions over tokens. A language model M is a function M : T ∗ → Dist(T ), where M(t1 . . . ti−1)(ti) denotes the probability that the model assigns to token ti ∈ T after reading the sequence t1 . . . ti−1 ∈ T ∗. We often write MD to make explicit that a multiset (i.e., a set that can contain multiple occurrences of each element) D ⊆ T ∗ was used to train the language model." }, { "heading": "2.2 ADVERSARY MODEL", "text": "We consider an adversary that has query access to two language models MD, MD′ that were trained on datasets D,D′ respectively (in the following, we use M and M ′ as shorthand for MD and MD′ ). The adversary can query the models with any sequence s ∈ T ∗ and observe the corresponding outputs MD(s),MD′(s) ∈ Dist(T ). The goal of the adversary is to infer information about the difference between the datasets D,D′.\nThis scenario corresponds to the case of language models deployed to client devices, for example in “smart” software keyboards or more advanced applications such as grammar correction." }, { "heading": "2.3 DIFFERENTIAL RANK", "text": "Our goal is to identify the token sequences whose probability differs most between M and M ′, as these are most likely to be related to the differences between D and D′.\nTo capture this notion formally, we define the differential score DS of token sequences, which is simply the sum of the differences of (contextualized) per-token probabilities. We also define a relative variant D̃S based on the relative change in probabilities, which we found to be more robust w.r.t. the “noise” introduced by different random initializations of the models M and M ′. Definition 1. Given two language models M,M ′ and a token sequence t1 . . . tn ∈ T ∗, we define the differential score of a token as the increase in its probability and the relative differential score as the relative increase in its probability. We lift these concepts to token sequences by defining\nDSM ′\nM (t1 . . . tn) = n∑ i=1 M ′(t1 . . . ti−1)(ti)−M(t1 . . . ti−1)(ti) ,\nD̃S M ′\nM (t1 . . . tn) = n∑ i=1 M ′(t1 . . . ti−1)(ti)−M(t1 . . . ti−1)(ti) M(t1 . . . ti−1)(ti) .\nThe differential score of a token sequence is best interpreted relative to that of other token sequences. This motivates ranking sequences according to their differential score. Definition 2. We define the differential rank DR(s) of s ∈ T ∗ as the number of token sequences of length |s| with differential score higher than s.\nDR(s) = ∣∣∣{s′ ∈ T |s| ∣∣∣DSM ′M (s′) > DSM ′M (s)}∣∣∣\nThe lower the rank of s, the more s is exposed by a model update." 
}, { "heading": "2.4 APPROXIMATING DIFFERENTIAL RANK", "text": "Our goal is to identify the token sequences that are most exposed by a model update, i.e., the sequences with the lowest differential rank (highest differential score). Exact computation of the differential rank for sequences of length n requires exploring a search space of size |T |n. To overcome this exponential blow-up, we propose a heuristic based on beam search.\nAt time step i, a beam search of width k maintains a set of k candidate sequences of length i. Beam search considers all possible k |T | single token extensions of these sequences, computes their differential scores and keeps the k highest-scoring sequences of length i+ 1 among them for the next step. Eventually, the search completes and returns a set S ⊆ Tn. We approximate the differential rank DR(s) of a sequence s by its rank among the sequences in the set S computed by beam search, i.e.\n∣∣∣{s′ ∈ S | DSM ′M (s′) > DSM ′M (s)}∣∣∣. The beam width k governs a trade-off between computational cost and precision of the result. For a sufficiently large width, S = T |s| and the result is the true rank of s. For smaller beam widths, the result is a lower bound on DR(s) as the search may miss sequences with higher differential score than those in S.\nIn experiments, we found that shrinking the beam width as the search progresses speeds the search considerably without compromising on the quality of results. Initially, we use a beam width |T |, which we half at each iteration (i.e., we consider |T | /2 candidate phrases of length two, |T | /4 sequences of length three, . . . )." }, { "heading": "3 EXPERIMENTAL RESULTS", "text": "In this section we report on experiments in which we evaluate privacy in language model updates using the methodology described in Section 2. We begin by describing the experimental setup." }, { "heading": "3.1 SETUP", "text": "For our experiments, we consider three datasets of different size and complexity, matched with standard baseline model architectures whose capacity we adapted to the data size. All of our models are implemented in TensorFlow. Note that the random seeds of the models are not fixed, so repeated training runs of a model on an unchanged dataset will yield (slightly) different results. We will release the source code as well as analysis tools used in our experimental evaluation at https://double/blind.\nConcretely, we use the Penn Treebank (Marcus et al., 1993) (PTB) dataset as a representative of low-data scenarios, as the standard training dataset has only around 900 000 tokens and a vocabulary size of 10 000. As corresponding model, we use a two-layer recurrent neural network using LSTM cells with 200-dimensional embeddings and hidden states and no additional regularization (this corresponds to the small configuration of Zaremba et al. (2014)).\nSecond, we use a dataset of Reddit comments with 20 million tokens overall, of which we split off 5% as validation set. We use a vocabulary size of 10 000. As corresponding model, we rely on a one-layer recurrent neural network using an LSTM cell with 512-dimensional hidden states and 160-dimensional embeddings, using dropout on inputs and outputs with a keep rate of 0.9 as regularizer. These parameters were chosen in line with a neural language model suitable for next-word recommendations on resource-bounded mobile devices. 
We additionally consider a model based on the Transformer architecture (Vaswani et al., 2017) (more concretely, using the BERT (Devlin et al., 2019) codebase) with four layers of six attention heads each with a hidden dimension of 192.\nFinally, we use the Wikitext-103 dataset (Merity et al., 2017) with 103 million training tokens as a representative of a big data regime, using a vocabulary size of 20 000. As model, we employ a two-layer RNN with 512-dimensional LSTM cells and token embedding size 512 and again dropout on inputs and outputs with a keep rate of 0.9 as regularizer. We combined this large dataset with this relatively low-capacity model (at least according to the standards of the state of the art in language modeling) to test if our analysis results still hold on datasets that clearly require more model capacity than is available." }, { "heading": "3.2 PRIVACY ANALYSIS OF MODEL UPDATES USING SYNTHETIC CANARIES", "text": "We first study the privacy implications of model updates in controlled experiments with synthetic data. To this end, we create a number of canary phrases that serve as a proxy for private data (they are grammatically correct and do not appear in the original dataset) and that exhibit a variety of different token frequency characteristics.\nSpecifically, we fix the length of the canary phrase to 5, choose a valid phrase structure (e.g. Subject, Verb, Adverb, Compound Object), and instantiate each placeholder with a token that has the desired frequency in D. We create canaries in which frequencies of tokens are all low (all tokens are from the least frequent quintile of words), mixed (one token from each quintile), increasing from low to high, and decreasing from high to low. As the vocabularies differ between the different datasets, the canaries are dataset-dependent. For example, the mixed phrase across all the datasets is “NASA used deadly carbon devices,” and the all low phrase for PTB is “nurses nervously trusted incompetent graduates.”\nFor a given dataset D and a canary phrase s ∉ D, we construct a dataset D+k∗s by inserting k copies of s into D. We use the differential score DS and the differential rank DR of the canary phrase s to answer a number of research questions on our model/dataset combinations. Note that analyzing removal of specific phrases from the dataset simply requires swapping the roles of D and D+k∗s.\nRQ1: What is the effect of the number of canary phrase insertions? We consider different numbers of insertions, adapted to the number of tokens in the training corpus (the non-aligned insertion frequencies are due to legacy reasons in our experiments; for the final version we will re-run all experiments with aligned frequencies):\n• For PTB, we consider k ∈ {10, 50, 100} canary insertions (corresponding to 1 canary token in 18K training tokens, 1 in 3.6K, and 1 in 1.8K).\n• For the Reddit dataset, we use k ∈ {5, 50, 500} (corresponding to 1 in 1M, 1 in 100K, 1 in 10K).\n• For the Wikitext-103 data, we use k ∈ {20, 100} (corresponding to 1 in 1M, 1 in 200K).\nTable 1 summarizes all of our experiments. As expected, the differential score of canaries grows monotonically with the number of insertions, for all kinds of canaries and models. More surprisingly, in cells with white background, the canary phrase has the maximum differential score among all token sequences found by our beam search, i.e. it ranks first. This means that the canary phrase can easily be extracted without any prior knowledge about it or the context in which it appears (this is in contrast to the single-model results of Carlini et al. (2018), who assumed a known prefix).
The signal for extraction is strong even when the inserted canaries account for only 0.0001% of the tokens in the dataset. This becomes visible in the first row of Table 1, where the differential score approaches 4, which is close to the upper bound of 5 (for 5-token canaries).\nRQ2: What is the effect of token frequency in training data? Comparing the columns of Table 1 answers this question:\n• Phrases with all low-frequency tokens consistently show the highest differential score. Such phrases rank first even when the model is updated with the smallest number of canary insertions, as seen in the first row of Table 1. This means that phrases composed of rare words are more likely to be exposed in model updates than other phrases.\n• Canary phrases that start with a low-frequency token, followed by tokens of increasing or mixed frequency, have higher rank than canaries with all low-frequency tokens, but become exposed for a moderate number of insertions into the dataset, see rows 2 and 3 of Table 1.\n• Canaries composed of tokens with descending frequency are the least susceptible to our analysis and tolerate a higher number of insertions before they become exposed. This is expected, as our beam search is biased towards finding high-scoring prefixes.\nRQ3: What is the effect of knowledge about the canary context? We evaluate the differential score of suffixes of our canary phrases assuming knowledge of a prefix. This gives insight into the extent to which an attacker with background knowledge can extract a secret. To this end we consider a dataset D, a canary phrase s = t1 . . . tn ∉ D and the augmented dataset D+k∗s. For i = 1, . . . , n we take the prefix t1 . . . ti−1 of the canary phrase and compute the differential score r of the token ti conditional on having read the prefix, i.e. M ′(t1 . . . ti−1)(ti) − M(t1 . . . ti−1)(ti). The relationship between i and r indicates how much knowledge about s is required to expose the remainder of the canary phrase.\nFigure 1 depicts the result of this analysis for canaries with high-to-low and all-low token frequencies on the Reddit dataset. Our results show that, while the differential score of the first token without context is close to zero, the score of subsequent tokens quickly grows for all-low canaries, even with a low number of canary insertions. In contrast, more context is required before observing a change in the differential score of high-to-low canaries, as the model is less influenced by the small number of additional occurrences of frequent tokens. This suggests that, even in cases where we fail to extract the canary without additional knowledge (see RQ1 above), an adversary can still use the differential rank to complete a partially known phrase, or confirm that a canary phrase was used to update the dataset.\nRQ3A: What is the effect of inserting canaries together with additional data? We consider the setting where the model is re-trained on data consisting of the canary phrase and some fresh in-distribution data Dextra along with the original dataset Dorig. Concretely, we first split our Reddit dataset into Dorig and Dextra (such that the latter is 20%, 50%, or 100% of the size of Dorig). Then, we trained a model M on the reduced Dorig and a second model M ′ on D = Dorig ∪ Dextra and the inserted canaries.
The results of this experiment are displayed in Table 2, where we can see that DS^{M'}_{M} does not change significantly. Note that the 0% column is identical to the result from Table 1. In conclusion, canaries can be extracted from the trained model even when they are contained in a substantially larger dataset extension.\nTable 2: DS^{M'}_{M} of the mixed-frequency canary phrase for the Reddit (RNN) model using different update techniques. We use T (R, Dorig) → M to denote that model M was obtained by training on data Dorig starting from model R. Here, R are (fresh) random initial parameters, Dextra is an additional dataset from the same distribution as Dorig, and C is the set of canaries. Re-training refers to training on the data from scratch with a new random initialization, whereas continued training fine-tunes an existing model on fresh data. A white cell background means that the differential rank DR (as approximated by our beam search) of the phrase is 0; a grey cell background means that DR is >1000.\nTraining methods: Re-training T (R, Dorig ∪ Dextra ∪ C) → M ′; Continued Training 1: T (M, Dextra ∪ C) → M ′; Continued Training 2: T (M, Dextra ∪ C) → M̃ followed by T (M̃, D′extra) → M ′.\nColumns, by |Dextra|/|Dorig|: Re-training 0% / 20% / 50% / 100%; Continued Training 1: 20% / 50% / 100%; Continued Training 2: 100%.\n1:1M — 0.23 / 0.224 / 0.223 / 0.229 | 0.52 / 0.34 / 0.46 | 0.01\n1:100K — 3.04 / 3.032 / 3.031 / 3.038 | 3.56 / 3.25 / 3.27 | 0.26\nRQ3B: What is the effect of updating the model using a continued training approach? We also consider the setting of continued training, in which an existing model is trained on a new dataset (e.g., pre-trained on a generic corpus and fine-tuned on a more specific dataset). For this, we train a model M on a dataset Dorig to convergence, and then continue training using the union of Dextra and the canaries. We use the same dataset splits as in RQ3A to create the Dorig and Dextra datasets. The results of this experiment are shown in the middle column of Table 2. We observe that in all cases the differential score is higher for continued training than re-training. As expected, the differential score of the canary phrase decreases as additional extra data is used for fine-tuning. This shows that the risk of leaking canary phrases increases when a model is updated using the continued training approach.\nFinally, we also consider a possible mitigation strategy, in which we perform continued training in two stages. For this, we split the dataset into three equal parts Dorig, Dextra and D′extra. We proceed as in the continued training setting above, but add a final step in which we train on another dataset after training on the canaries. This resembles a setting where an attacker does not have access to two consecutive snapshots. The results are in the right column of Table 2, showing that the differential score of the canary phrase drops substantially after the second training stage. Thus, two-stage or multi-stage continued training might mitigate leakage of private data." }, { "heading": "3.3 PRIVACY ANALYSIS OF MODEL UPDATES USING SUBJECT-SPECIFIC CONVERSATIONS", "text": "We now study the privacy implications of model updates in real-world scenarios.
As a representative scenario, we compare models trained on the Reddit dataset against models trained on the same data augmented with messages from one of two newsgroups from the 20 Newsgroups dataset (Lang, 1995): a) rec.sport.hockey, containing around 184K tokens, approximately 1% of the original training data; and b) talk.politics.mideast, containing around 430K tokens, approximately 2% of the original training data. For both newsgroups, the number of tokens we insert is significantly larger than in the synthetic experiments of Section 3.2. However, we insert full conversations, many of which are of a general nature and off the topic of the newsgroup.\nRQ4: Do the results on synthetic data extend to real-world data? As opposed to synthetic experiments using canaries, when using real-world data there is no individual token sequence whose rank would serve as a clear indicator of a successful attack. We hence resort to a qualitative evaluation where we inspect the highest-scoring sequences found by beam search. Since the sequences returned by vanilla beam search typically share a common prefix, we instead run a group beam search to obtain a more representative sample: we split the initial |T| one-token sequences into N groups according to their differential score, and run parallel beam searches extending each of the groups independently.\nTable 3 displays the result of our evaluation on Reddit augmented with rec.sport.hockey, i.e., the highest-scoring sequences of length 4 in each group of a D̃S-based group beam search with N = 5 (top row) and N = 1 (bottom row). The exposed sentences are on-topic w.r.t. the newsgroup added, which suggests that the phrases with highest relative differential score are specific to the newsgroup used and that, indeed, data used for the update is exposed. We obtain results of comparable relevance using the talk.politics.mideast newsgroup, which we report in Table 4.\nRQ4A: What is the effect of re-training with additional public data? Similarly to RQ3A in Section 3.2, we consider partitions of the Reddit dataset into datasets Dorig and Dextra of different relative sizes. For each partition, we compare a model M trained on Dorig to a model M ′ trained on Dextra ∪ N, where N are all messages from one newsgroup from the 20 Newsgroups dataset. We sample a few representative phrases from group beam searches on all pairs of models and compare their relative differential scores.\nIn all cases, a group beam search can recover subject-specific phrases from newsgroup discussions. Some of the phrases resemble canaries, in that they occur multiple times literally in the datasets (e.g. Center for Policy Research), while others never occur literally but digest recurrent discussions (e.g. Partition of northern Israel). Table 5 shows the relative differential score of these phrases for different partitions. As observed for canaries, scores vary little when additional data is used during re-training.\nRQ4B: What is the effect of continued training? Using the same dataset splits as in RQ4A and similarly to RQ3B in Section 3.2, we compare a model M trained from scratch on Dorig to a model M ′ obtained by continuing training M on Dextra ∪ N. The results are shown in Table 5. The last two rows contain phrases found by group beam search on M and a model M ′ obtained from M by continued training, but that have too low a score to be found when M ′ is re-trained from scratch instead. The converse, i.e.
phrases that have low score when continuing training and high score when re-training, seem to occur rarely and less consistently (e.g. Saudi troops surrounded village). For canary-like phrases, the results are in line with those in Table 2, with scores decreasing as more data is used during the fine-tuning stage. For other phrases, the results are not as clear-cut. While fine-tuning a model exclusively on private data yields scores that are significantly higher than when re-training a model from scratch, this effect vanishes as more additional data is used; in some cases continued training yields scores lower than when re-training a model on the same data." }, { "heading": "4 MITIGATION VIA DIFFERENTIAL PRIVACY", "text": "In this section we explore how differential privacy can be used to mitigate the information leakage induced by a model update. Differential privacy (DP) (Dwork & Roth, 2014) provides strong guarantees on the amount of information leaked by a released output. Given a computation over records, it guarantees limits on the effect that any input record can have on the output. Formally, F is an (ε, δ)-differentially-private computation if for any datasets D and D′ that differ in one record and for any subset O of possible outputs of F we have\nPr(F(D) ∈ O) ≤ exp(ε) · Pr(F(D′) ∈ O) + δ.\nAt a high level, differential privacy can be enforced in gradient-based optimization computations (Abadi et al., 2016; Song et al., 2013; Bassily et al., 2014) by clipping the gradient of every record in a batch according to some bound L, then adding noise proportional to L to the sum of the clipped gradients, averaging over the batch size, and using this noisy average gradient during backpropagation.\nDifferential privacy is a natural candidate for defending against membership-like inferences about the input data. The exact application of differential privacy for protecting the information in the model update depends on what one wishes to protect w.r.t. the new data: individual sentences in the new data or all information present in the update. For the former, sequence-level privacy can suffice, while for the latter, group DP can serve as a mitigation technique, where the size of the group is proportional to the number of sequences in the update. Recall that an ε-DP algorithm F is kε-differentially private for groups of size k (Dwork & Roth, 2014).\nRQ5: Does DP protect against phrase extraction based on differential ranks? We evaluate the extent to which DP mitigates the attacks considered in this paper by training models with sequence-level differential privacy on the Penn Treebank dataset with canaries. We train DP models using the TensorFlow Privacy library (Andrew et al., 2019) for two sets of (ε, δ) parameters, (5, 1 × 10−5) and (111, 1 × 10−5), for two datasets: PTB and PTB with 50 insertions of the all-low-frequency canary. We rely on (Andrew et al., 2019) to train models with differentially private stochastic gradient descent using the Gaussian noise mechanism and to compute the overall privacy loss of the training phase.\nAs expected, the performance of models trained with DP degrades, in our case from ≈23% accuracy in predicting the next token on the validation dataset to 11.89% and 13.34% for ε values of 5 and
While the beam search with the parameters used in Section 3.2 does not return the canary phrase for the DP-trained models anymore, we note that the models have degraded so far that they are essentially only predicting the most common words from each class (“is” when a verb is required, . . . ) and thus, the result is unsurprising. We note that the guarantees of sequence-level DP formally do not apply for the case where canary phrases are inserted as multiple sequences, and that values for our models are high. However, the -analysis is an upper bound and similar observations about the effectiveness of training with DP with high were reported by Carlini et al. (2018).\nWe further investigate the effect of DP training on the differential rank of a canary phrase that was inserted 50 times. Instead of using our beam search method to approximate the differential rank, we fully explore the space of subsequences of length two, and find that the DR for the two-token prefix of our canary phrase dropped from 0 to 9 458 399 and 849 685 for the models with = 5 and = 111 respectively. In addition, we compare the differential score of the whole phrase and observe that it drops from 3.94 for the original model to 4.5× 10−4 and 2.1× 10−3 for models with = 5 and = 111 respectively.\nThough our experiment results validate that DP can mitigate the particular attack method considered in this paper for canary phrases, the model degradation is significant. In addition, the computational overhead of per-sequence gradient clipping is substantial, making it unsuitable for training highcapacity neural language models on large datasets." }, { "heading": "5 RELATED WORK", "text": "In recent years several works have identified how machine learning models leak information about private training data. Membership attacks introduced by Shokri et al. (2017) show that one can identify whether a record belongs to the training dataset of a classification model given black-box access to the model and shadow models trained on data from a similar distribution. Salem et al. (2019b) demonstrate that similar attacks are effective under weaker adversary models. Attribute inference attacks (Fredrikson et al., 2015), which leak the value of sensitive attributes of training records, have been shown successful for regression and classification models. In the distributed learning setting, Hitaj et al. (2017) and Melis et al. (2018) demonstrate that individual gradient updates to a model can reveal features specific to one’s private dataset.\nCarlini et al. (2018) is closest to our work, as it also considers information leakage of language models. The authors assess the risk of (unintended) memorization of rare sequences in the training data by introducing an exposure metric. They show that exposure values can be used to retrieve canaries inserted into training data from a character-level language model. The key differences to our approach are that 1) we consider an adversary that has access to two snapshots of a model, and 2) our canaries are grammatically correct sentences (i.e., follow the distribution of the data) whereas Carlini et al. (2018) add a random sequence of numbers in a fixed context (e.g., “The random number is ...”) into a dataset of financial news articles, where such phrases are rare. 
We instead consider the scenario of extracting canaries without any context, even if the canary token frequency in the training dataset is as low as one in a million, and for canary phrases that are more similar to the training data.\nSong & Shmatikov (2018) also study sequence-to-sequence language models and show how a user can check if their data has been used for training. In their setting, an auditor needs an auxiliary dataset to train shadow models with the same algorithm as the target model and queries the target model for predictions on a sample of the user’s data. The auxiliary dataset does not need to be drawn from the same distribution as the original training data (unlike Shokri et al. (2017)) and the auditor only observes a list of several top-ranked tokens. In contrast, our approach requires no auxiliary dataset, but assumes access to the probability distributions over all tokens from two different model snapshots. From this, we are able to recover full sequences from the differences in training data rather than binary information about data presence. Like them, we find that sequences with infrequent tokens provide a stronger signal to the adversary/auditor.\nSalem et al. (2019a) consider reconstruction of training data that was used to update a model. While their goal is similar to ours, their adversarial model and setup differ: 1) similar to Song & Shmatikov (2018) and Shokri et al. (2017), their attacker uses shadow models trained on auxiliary data drawn from the same distribution as the target training dataset, while in our setting the attacker has no prior knowledge of this distribution and does not need auxiliary data; 2) the updated model is obtained by fine-tuning the target model with additional data rather than re-training it from scratch on the changed dataset; 3) the focus is on classification models and not on (generative) language models.\nInformation leakage from updates has also been considered in the setting of searchable encryption. An attacker who has control over data in an update to an encrypted database can learn information about the content of the database as well as previous encrypted searches on it (Cash et al., 2015). Pan-privacy (Dwork et al., 2010), on the other hand, studies the problem of maintaining differential privacy when an attacker observes snapshots of the internal state of a differentially-private algorithm between data updates.\nIn terms of defenses, McMahan et al. (2018) study how to train LSTM models with differential privacy guarantees at a user level. They investigate utility and privacy trade-offs of the trained models depending on a range of parameters (e.g., clipping bound and batch size). Carlini et al. (2018) show that differential privacy protects against leakage of canaries in character-level models, while Song & Shmatikov (2018) show that an audit as described above fails when training language models with user-level differential privacy using the techniques of McMahan et al. (2018). Ginart et al. (2019) define deletion of a training data point from a model as a stochastic operation returning the same distribution as re-training from scratch without that point, and develop deletion algorithms for k-means clustering with low amortized cost. Publishing snapshots of a model before and after a deletion matches our adversarial model, and our results apply.
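Returning to the mitigation of Section 4: the following is a minimal sketch of one differentially private gradient step (clip each per-record gradient to a bound L, add Gaussian noise proportional to L, then average over the batch). In our experiments this role is played by the TensorFlow Privacy library; the NumPy rendition below is illustrative only.

```python
import numpy as np

# Sketch of one DP-SGD step (Abadi et al., 2016): per-record clipping to
# norm L, Gaussian noise scaled to L, then averaging over the batch.

def dp_average_gradient(per_record_grads, L, noise_multiplier, rng):
    clipped = [
        g * min(1.0, L / max(np.linalg.norm(g), 1e-12))  # clip to norm L
        for g in per_record_grads
    ]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * L, size=total.shape)
    return (total + noise) / len(per_record_grads)
```

For example, with rng = np.random.default_rng(0), this noisy average gradient would replace the ordinary batch gradient in the optimizer update.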
}, { "heading": "6 CONCLUSION", "text": "As far as we know, this article presents the first systematic study of the privacy implications of releasing snapshots of a language model trained on overlapping data. We believe this is a realistic threat that needs to be considered in the lifecycle of machine learning applications. We aim to encourage the research community to work towards quantifying and reducing the exposure caused by model updates, and hope to make practitioners aware of the privacy implications of deploying high-capacity language models as well as their updates." } ]
2019
null
SP:044d99499c4a9cb383f5e39a28fc7ccb700040d1
[ "The paper proposes an ensemble method for reinforcement learning in which the policy updates are modulated with a loss which encourages diversity among all experienced policies. It is a combination of SAC, normalizing flow policies, and an approach to diversity considered by Hong et al. (2018). The work seems rather incremental and the experiments have some methodological flaws. Specifically the main results (Fig. 4) are based on a comparison between 4 different codebases which makes it impossible to make meaningful conclusions as pointed out e.g. by [1]. The authors mention that their work is built on the work of Hong et al. (2018) yet the comparisons do not seem to include it as a baseline. I'm also concerned about how exactly are environment steps counted: in Algorithm 1 on line 27, it seems that the fitness which is used for training is evaluated by interacting with the environment yet these interactions are not counted towards total_step.", "RL in environments with deceptive rewards can produce sub-optimal policies. To remedy this, the paper proposes a method for population-based exploration. Multiple actors, each parameterized with policies based on Normalizing Flows (radial contractions), are optimized over iterations using the off-policy SAC algorithm. To encourage diverse-exploration as well as high-performance, the SAC-policy-gradient is supplemented with gradient of an “attraction” or “repulsion” term, as defined using the KL-divergence of current policy to another policy from an online archive. When applying the KL-gradient, the authors find it crucial to only update the flow layers, and not the base Gaussian policy." ]
In reinforcement learning, robotic control tasks are often useful for understanding how agents perform in environments with deceptive rewards where the agent can easily become trapped in suboptimal solutions. One way to avoid these local optima is to use a population of agents to ensure coverage of the policy space (a form of exploration), yet learning a population with the “best” coverage is still an open problem. In this work, we present a novel approach to population-based RL in continuous control that leverages properties of normalizing flows to perform attractive and repulsive operations between current members of the population and previously observed policies. Empirical results on the MuJoCo suite demonstrate a high performance gain for our algorithm compared to prior work, including Soft Actor-Critic (SAC).
[]
[ { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Richard Bellman" ], "title": "A markovian decision process", "venue": "Journal of Mathematics and Mechanics,", "year": 1957 }, { "authors": [ "Edoardo Conti", "Vashisht Madhavan", "Felipe Petroski Such", "Joel Lehman", "Kenneth Stanley", "Jeff Clune" ], "title": "Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Yan Duan", "Xi Chen", "Rein Houthooft", "John Schulman", "Pieter Abbeel" ], "title": "Benchmarking deep reinforcement learning for continuous control", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In International conference on machine learning (ICML),", "year": 2016 }, { "authors": [ "Abhishek Gupta", "Russell Mendonca", "YuXuan Liu", "Pieter Abbeel", "Sergey Levine" ], "title": "Metareinforcement learning of structured exploration strategies", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Deepti Gupta", "Shabina Ghafir" ], "title": "An overview of methods maintaining diversity in genetic algorithms", "venue": "International journal of emerging technology and advanced engineering,", "year": 2012 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft Actor-Critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Peter Henderson", "Thang Doan", "Riashat Islam", "David Meger" ], "title": "Bayesian policy gradients via alpha divergence dropout inference", "venue": "NIPS Bayesian Deep Learning Workshop,", "year": 2017 }, { "authors": [ "Zhang-Wei Hong", "Tzu-Yun Shann", "Shih-Yang Su", "Yi-Hsiang Chang", "Tsu-Jui Fu", "Chun-Yi Lee" ], "title": "Diversity-driven exploration strategy for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Jeffrey Horn", "Nicholas Nafpliotis", "David E. Goldberg" ], "title": "A niched pareto genetic algorithm for multiobjective optimization", "venue": "In Proceedings of the 1st IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence,", "year": 1994 }, { "authors": [ "Shauharda Khadka", "Kagan Tumer" ], "title": "Evolution-guided policy gradient in reinforcement learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Shauharda Khadka", "Somdeb Majumdar", "Tarek Nassar", "Zach Dwiel", "Evren Tumer", "Santiago Miret", "Yinyin Liu", "Kagan Tumer" ], "title": "Collaborative evolutionary reinforcement learning", "venue": "CoRR, abs/1905.00976,", "year": 2019 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Kyowoon Lee", "Sol-A Kim", "Jaesik Choi", "Seong-Whan Lee" ], "title": "Deep reinforcement learning in continuous action spaces: a case study in the game of simulated curling", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yang Liu", "Prajit Ramachandran", "Qiang Liu", "Jian Peng" ], "title": "Stein variational policy gradient", "venue": "In Conference on Uncertainty in Artificla Intelligence (UAI),", "year": 2017 }, { "authors": [ "Samir W Mahfoud" ], "title": "Niching methods for genetic algorithms", "venue": "PhD thesis,", "year": 1995 }, { "authors": [ "Michael L Mauldin" ], "title": "Maintaining diversity in genetic search", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 1984 }, { "authors": [ "Bogdan Mazoure", "Thang Doan", "Audrey Durand", "R Devon Hjelm", "Joelle Pineau" ], "title": "Leveraging exploration in off-policy algorithms via normalizing flows", "venue": "Proceedings of the 3rd Conference on Robot Learning (CoRL", "year": 2019 }, { "authors": [ "Joelle Pineau" ], "title": "The machine learning reproducibility checklist", "venue": null, "year": 2018 }, { "authors": [ "Matthias Plappert", "Rein Houthooft", "Prafulla Dhariwal", "Szymon Sidor", "Richard Y Chen", "Xi Chen", "Tamim Asfour", "Pieter Abbeel", "Marcin Andrychowicz" ], "title": "Parameter space noise for exploration", "venue": null, "year": 1905 }, { "authors": [ "Matthias Plappert", "Rein Houthooft", "Prafulla Dhariwal", "Szymon Sidor", "Richard Y. Chen", "Xi Chen", "Tamim Asfour", "Pieter Abbeel", "Marcin Andrychowicz" ], "title": "Parameter space noise for exploration", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Aloïs Pourchot", "Olivier Sigaud" ], "title": "CEM-RL: Combining evolutionary and gradient-based methods for policy search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Martin L Puterman" ], "title": "Markov decision processes: discrete stochastic dynamic programming", "venue": null, "year": 2014 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint:", "year": 2017 }, { "authors": [ "Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "OpenAI Xi Chen", "Yan Duan", "John Schulman", "Filip DeTurck", "Pieter Abbeel" ], "title": "Exploration: A study of count-based exploration for deep reinforcement learning", "venue": "In Advances in neural information processing systems (NeurIPS),", "year": 2017 }, { "authors": [ "Yunhao Tang", "Shipra Agrawal" ], "title": "Boosting trust region policy optimization by normalizing flows policy", "venue": "arXiv preprint:", "year": 2018 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics 
engine for model-based control", "venue": "In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2012 }, { "authors": [ "Ahmed Touati", "Harsh Satija", "Joshua Romoff", "Joelle Pineau", "Pascal Vincent" ], "title": "Randomized value functions via multiplicative normalizing flows", "venue": "arXiv preprint:", "year": 2018 }, { "authors": [ "Brian D Ziebart" ], "title": "Modeling purposeful adaptive behavior with the principle of maximum causal entropy", "venue": "PhD thesis, figshare,", "year": 2010 } ]
[ { "heading": null, "text": "In reinforcement learning, robotic control tasks are often useful for understanding how agents perform in environments with deceptive rewards where the agent can easily become trapped into suboptimal solutions. One way to avoid these local optima is to use a population of agents to ensure coverage of the policy space (a form of exploration), yet learning a population with the “best” coverage is still an open problem. In this work, we present a novel approach to population-based RL in continuous control that leverages properties of normalizing flows to perform attractive and repulsive operations between current members of the population and previously observed policies. Empirical results on the MuJoCo suite demonstrate a high performance gain for our algorithm compared to prior work, including Soft-Actor Critic (SAC)." }, { "heading": "1 INTRODUCTION", "text": "Many important reinforcement learning (RL) tasks, such as those in robotics and self-driving cars, are challenging due to large action and state spaces (Lee et al., 2018). In particular, environments with large continuous action spaces are prone to deceptive rewards, i.e. fall into local optima in learning (Conti et al., 2018). Applying traditional policy optimization algorithms to these domains often leads to locally optimal, yet globally sub-optimal policies. The agent should then explore the reward landscape more thoroughly in order to avoid falling into these local optima.\nNot all RL domains that require exploration are suitable for understanding how to train agents that are robust to deceptive rewards. For example, Montezuma’s Revenge, a game in the Atari Learning Environment (Bellemare et al., 2013), has sparse rewards; algorithms that perform the best on this task encourage exploration by providing a denser intrinsic reward to the agent to encourage exploration (Tang et al., 2017). On the other hand, many robotic control problems, such as those found in MuJoCo (Todorov et al., 2012), provide the agent with a dense reward signal, yet their high-dimensional action spaces induce a multimodal, often deceptive, reward landscape. For example, in the biped environments, coordinating both arms and legs is crucial for performing well on even simple tasks such as forward motion. However, simply learning to maximize the reward can be detrimental across training: agents will tend to run and fall further away from the start point rather than discovering stable and efficient walking motion. In this setting, exploration serves to provide a more reliable learning signal for the agent by covering more different types of actions during learning.\nOne way to maximize action space coverage is the maximum entropy RL framework (Ziebart, 2010), which prevents variance collapse by adding a policy entropy auxiliary objective. One such prominent algorithm, Soft Actor-Critic (SAC,Haarnoja et al. (2018)), has been shown to excel in large continuous action spaces. To further improve on exploration properties of SAC, one can maintain a population of agents that cover non-identical sections of the policy space. To prevent premature convergence, a diversity-preserving mechanism is typically put in place; balancing the objective and the diversity term becomes key to converging to a global optimum (Hong et al., 2018). This paper studies a particular family of population-based exploration methods, which conduct coordinated local search in the policy space. 
Prior work on population-based strategies improves performance on robotic control domains through stochastic perturbations of a single actor’s parameters (Pourchot & Sigaud, 2019) or of a set of actors’ parameters (Conti et al., 2018; Khadka & Tumer, 2018; Liu et al., 2017). We hypothesize that exploring directly in the policy space will be more effective than perturbing the parameters of the policy, as the latter does not guarantee diversity (i.e., different neural network parameterizations can approximately represent the same function).\nGiven a population of RL agents, we enforce local exploration using an Attraction-Repulsion (AR) mechanism. The latter consists of adding an auxiliary loss to encourage pairwise attraction or repulsion between members of a population, as measured by a divergence term. We make use of the Kullback-Leibler (KL) divergence because of its desirable statistical properties and its ease of computation. However, naively maximizing the KL term between two Gaussian policies can be detrimental (e.g., it drives both means apart). Because of this, we parametrize the policy with a general family of distributions called Normalizing Flows (NFs, Rezende & Mohamed, 2015); this modification allows us to improve upon AR+Gaussian (see Appendix Figure 6). NFs are shown to improve the expressivity of the policies using invertible mappings while maintaining entropy guarantees (Mazoure et al., 2019; Tang & Agrawal, 2018). Nonlinear density estimators have also been previously used for deep RL problems in the contexts of distributional RL (Doan et al., 2018) and reward shaping (Tang et al., 2017). The AR objective blends particularly well with SAC, since computing the KL requires stochastic policies with tractable densities for each agent." }, { "heading": "2 PRELIMINARIES", "text": "We first formalize the RL setting as a Markov decision process (MDP). A discrete-time, finite-horizon MDP (Bellman, 1957; Puterman, 2014) is described by a state space S, an action space A, a transition function P : S × A × S 7→ R+, and a reward function r : S × A 7→ R (both S and A can be either discrete or continuous). On each round t, an agent interacting with this MDP observes the current state st ∈ S, selects an action at ∈ A, and observes a reward r(st, at) ∈ R upon transitioning to a new state st+1 ∼ P(st, at). Let γ ∈ [0, 1] be a discount factor. The goal of an agent evolving in a discounted MDP is to learn a policy π : S × A 7→ [0, 1] such that taking actions at ∼ π(·|st) maximizes the expected sum of discounted returns,\nV^{\pi}(s) = \mathbb{E}_{\pi}\left[ \sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t) \mid s_0 = s \right].\nIn the following, we use ρπ to denote the trajectory distribution induced by following policy π. If S or A are vector spaces, action and state vectors are denoted by a and s, respectively." }, { "heading": "2.1 DISCOVERING NEW SOLUTIONS THROUGH POPULATION-BASED ATTRACTION-REPULSION", "text": "Consider evolving a population of M agents, also called individuals, \{\pi_{\theta_m}\}_{m=1}^{M}, each agent corresponding to a policy with its own parameters. In order to discover new solutions, we aim to generate agents that can mimic some target policy while following a path different from those of other policies.\nLet G denote an archive of policies encountered in previous generations of the population.
A natural way of enforcing π to be different from or similar to the policies contained in G is by augmenting the loss of the agent with an Attraction-Repulsion (AR) term:\n\mathcal{L}_{AR} = -\mathbb{E}_{\pi' \sim \mathcal{G}}\left[ \beta_{\pi'} D_{KL}[\pi \,\|\, \pi'] \right], (1)\nwhere π′ is an archived policy and βπ′ is a coefficient weighting the relative importance of the Kullback-Leibler (KL) divergence between π and π′, which we will choose to be a function of the average reward (see Sec. 3.2 below). Intuitively, Eq. 1 adds to the agent objective a weighted average distance between the current and the archived policies. For βπ′ ≥ 0, the agent tends to move away from the archived policy’s behavior (i.e. repulsion, see Figure 1a). On the other hand, βπ′ < 0 encourages the agent π to imitate π′ (i.e. attraction).\nRequirements for AR In order for agents within a population to be trained using the proposed AR-based loss (Eq. 1), we have the following requirements:\n1. Their policies should be stochastic, so that the KL-divergence between two policies is well-defined.\n2. Their policies should have tractable distributions, so that the KL-divergence can be computed easily, either with a closed-form solution or by Monte Carlo estimation.\nSeveral RL algorithms enjoy such properties (Haarnoja et al., 2018; Schulman et al., 2015; 2017). In particular, the soft actor-critic (SAC, Haarnoja et al., 2018) is a straightforward choice, as it currently outperforms other candidates and is off-policy, thus maintaining a single critic shared among all agents (instead of one critic per agent), which reduces computation costs." }, { "heading": "2.2 SOFT ACTOR-CRITIC", "text": "SAC (Haarnoja et al., 2018) is an off-policy learning algorithm which finds the information projection of the Boltzmann Q-function onto the set of diagonal Gaussian policies Π:\n\pi = \arg\min_{\pi' \in \Pi} D_{KL}\left( \pi'(\cdot|s_t) \,\Big\|\, \frac{\exp\left( \frac{1}{\alpha} Q^{\pi_{old}}(s_t, \cdot) \right)}{Z^{\pi_{old}}(s_t)} \right),\nwhere α ∈ (0, 1) controls the temperature, i.e. the peakedness of the distribution. The policy π, critic Q, and value function V are optimized according to the following loss functions:\n\mathcal{L}_{\pi,SAC} = \mathbb{E}_{s_t \sim \mathcal{B}}\left[ \mathbb{E}_{a_t \sim \pi}[\alpha \log \pi(a_t|s_t) - Q(s_t, a_t)] \right] (2)\n\mathcal{L}_{Q} = \mathbb{E}_{(s,a,r,s') \sim \mathcal{B}}\left[ \left( Q(s,a) - (r + \gamma V^{\pi}_{\nu}(s')) \right)^2 \right] (3)\n\mathcal{L}_{V} = \mathbb{E}_{s_t \sim \mathcal{B}}\left[ \frac{1}{2}\left( V^{\pi}_{\nu}(s_t) - \mathbb{E}_{a_t \sim \pi}[Q(s_t, a_t) - \alpha \log \pi(a_t|s_t)] \right)^2 \right], (4)\nwhere B is the replay buffer. The policy used in SAC as introduced in Haarnoja et al. (2018) is Gaussian, which is both stochastic and tractable, and thus compatible with our AR loss function in Eq. 1. Together with the AR loss in Eq. 1, the final policy loss becomes:\n\mathcal{L}_{\pi} = \mathcal{L}_{\pi,SAC} + \mathcal{L}_{AR} (5)\nHowever, Gaussian policies are arguably of limited expressiveness; we can improve on the family of policy distributions without sacrificing the qualities necessary for AR or SAC by using Normalizing Flows (NFs, Rezende & Mohamed, 2015)." }, { "heading": "2.3 NORMALIZING FLOWS", "text": "NFs (Rezende & Mohamed, 2015) were introduced as a means of transforming simple distributions into more complex distributions using learnable and invertible functions. Given a random variable z_0 with density q_0, they define a set of differentiable and invertible functions, \{f_i\}_{i=1}^{N}, which generate a sequence of d-dimensional random variables, \{z_i\}_{i=1}^{N}. Because SAC uses explicit, yet simple, parametric policies, NFs can be used to transform the SAC policy into a richer one (e.g., multimodal) without risk of information loss. For example, Mazoure et al.
(2019) enhanced SAC using a family of radial contractions around a point z_0 ∈ R^d,\nf(z) = z + \frac{\beta}{\alpha + \|z - z_0\|_2}(z - z_0) (6)\nfor α ∈ R+ and β ∈ R. This results in a rich set of policies comprised of an initial noise sample a_0, a state-noise embedding h_θ(a_0, s_t), and a flow \{f_{\phi_i}\}_{i=1}^{N} of arbitrary length N, parameterized by \phi = \{\phi_i\}_{i=1}^{N}. Sampling from the policy π_{φ,θ}(a_t|s_t) can be described by the following set of equations:\na_0 \sim \mathcal{N}(0, I); \quad z = h_{\theta}(a_0, s_t); \quad a_t = f_{\phi_N} \circ f_{\phi_{N-1}} \circ \dots \circ f_{\phi_1}(z), (7)\nwhere h_{\theta}(a_0, s_t) = a_0 \sigma I + \mu(s_t) depends on the state and the noise variance σ > 0. Different SAC policies can thus be crafted by parameterizing their NF layers." }, { "heading": "3 ARAC: ATTRACTION-REPULSION ACTOR-CRITIC", "text": "We now detail the general procedure for training a population of agents using the proposed diversity-seeking AR mechanism. More specifically, we consider here SAC agents enhanced with NFs (Mazoure et al., 2019). Figure 1 displays the general flow of the procedure. Algorithm 1 (Appendix) provides the pseudo-code of the proposed ARAC strategy, where sub-procedures for rollout and archive update can be found in the Appendix.\nOverview ARAC works by evolving a population of M SAC agents \{\pi^{m}_{\phi,\theta}\}_{m=1}^{M} with radial NF policies (Eq. 7) and a shared critic Q_ω, and by maintaining an archive of policies encountered in previous generations of the population. After performing T steps per agent on the environment (Alg. 1 L8-12), individuals are evaluated by performing R rollouts on the environment (Alg. 1 L26-28); these steps can be performed in parallel. This allows identifying the top-K best agents (Alg. 1 L29), also called elites, which will be used to update the critic as they provide the most meaningful feedback (Alg. 1 L13-17). The archive is finally updated in a diversity-seeking fashion using the current population (Alg. 1 L30).\nThe core component of the proposed approach lies within the update of the agents (Alg. 1 L18-25). During this phase, elite individuals are updated using AR operations w.r.t. policies sampled from the archive (Eq. 5), whereas non-elites are updated regularly (Eq. 2)." }, { "heading": "3.1 ENHANCING DIVERSITY IN THE ARCHIVE", "text": "Throughout the training process, we maintain an archive G of maximum capacity G, which contains some previously encountered policies. The process goes as follows: until reaching full capacity, the archive saves a copy of the parameters of every individual in the population after the evaluation step. However, by naively adding all individuals as if the archive were just a heap, the archive could end up filled with policies leading to similar rewards, which would result in a loss of diversity (Mauldin, 1984). We mitigate this issue by keeping track of two fitness clusters (low and high) using the partition formed by running a k-means algorithm on the fitness values. Hence, when |G| = G is reached and a new individual is added to the archive, it randomly replaces an archived policy from its respective cluster. This approach, also known as niching, has proved itself effective at maintaining high diversity levels (Gupta & Ghafir, 2012; Mahfoud, 1995)." }, { "heading": "3.2 DISCOVERING NEW POLICIES THROUGH ATTRACTION-REPULSION", "text": "The crux of this work lies in the explicit search for diversity in the policy space achieved using the AR mechanism. Since the KL between two base policies (i.e.
input of the first flow layer) can be trivially maximized by driving their means apart, we apply attraction-repulsion only on the flow layers, while holding the mean of the base policy constant. This ensures that the KL term does not depend on the difference in means and hence controls the magnitude of the AR mechanism. Every time the AR operator is applied (Alg. 1 L20-21), n policies are sampled from the archive and are used for estimating the AR loss (Eq. 1). As in Hong et al. (2018), we consider two possible strategies to dictate the value of the βπ′ coefficients for policies π′ ∼ G:\n\beta_{\pi'} = -\left[ 2\left( \frac{f(\pi') - f_{min}}{f_{max} - f_{min}} \right) - 1 \right] \quad \text{(proactive)} (8)\n\beta_{\pi'} = 1 - \frac{f(\pi') - f_{min}}{f_{max} - f_{min}} \quad \text{(reactive)} (9)\nwhere f(π) represents the fitness function of policy π (the average reward in our case; we overload the notation f for both the normalizing flow and the fitness, depending on context), and f_{min} and f_{max} are estimated based on the n sampled archived policies. The proactive strategy aims to mimic high-reward archived policies, while the reactive strategy is more cautious, only repulsing the current policy away from low-fitness archived policies. Using this approach, the current agent policy will be attracted to some sampled policies (βπ′ < 0) and will be repulsed from others (βπ′ ≥ 0) in a more or less aggressive way, depending on the strategy.\nUnlike Hong et al. (2018), who applied proactive and reactive strategies on policies up to 5 timesteps back, we maintain an archive consisting of two clusters seen so far: policies with low and high fitness, respectively. Having these clusters allows attracting/repulsing from a set of diverse agents, replacing high-reward policies by policies with similar performance. Indeed, without this process, elements of the archive would collapse on the most frequent policy, from which all agents would attract/repulse. To avoid performing AR against a single \"average policy\", we separate low-reward and high-reward agents via clustering." }, { "heading": "4 RELATED WORK", "text": "The challenges of exploration are well studied in the RL literature. Previously proposed approaches for overcoming hard exploration domains tend to either increase the capacity of the state-action value function (Gal & Ghahramani, 2016; Henderson et al., 2017) or the policy expressivity (Mazoure et al., 2019; Tang & Agrawal, 2018; Touati et al., 2018). This work rather tackles exploration from a diverse multi-agent perspective. Unlike prior population-based approaches for exploration (Conti et al., 2018; Khadka & Tumer, 2018; Pourchot & Sigaud, 2019), which seek diversity through the parameter space, we directly promote diversity in the policy space.\nThe current work was inspired by Hong et al. (2018), who relied on the KL divergence to attract/repulse from a set of previous policies to discover new solutions. However, in their work, the archive is time-based (they restrict themselves to the 5 most recent policies), while our archive is built following a diversity-seeking strategy (i.e., niching, and policies come from multiple agents). Notably, ARAC is different from previously discussed works in that it explores the action space in multiple regions simultaneously, a property enforced through the AR mechanism.\nThe proposed approach bears some resemblance to Liu et al. (2017), who took advantage of a multi-agent framework in order to perform repulsion operations among agents using similarity kernels between the parameters of the agents. The AR mechanism gives rise to exploration through structured rather than randomized policies.
This strategy has also been employed in multi-task learning (Gupta et al., 2018), where experience on previous tasks was used to explore on new tasks." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 DIDACTIC EXAMPLE", "text": "Consider a 2-dimensional multi-armed bandit problem where the actions lie in the real square [−6, 6]^2. We illustrate the use of a proactive strategy where a SAC agent with a radial flow policy imitates a desirable (expert) policy while simultaneously repelling from a less desirable policy. The task consists of matching the expert’s policy (blue density) while avoiding taking actions from a repulsive policy π′ (red). We illustrate the properties of radial flows in Figure 2 by increasing the number of flows (where zero flows correspond to a Gaussian distribution).\nWe observe that increasing the number of flows (bottom to top) leads to more complex policy shapes and multimodality, unlike the Gaussian policy, which has its variance shrunk (the KL divergence is proportional to the ratio of the two variances, hence maximizing it can lead to a reduction in the variance, which can be detrimental for exploration purposes). Details are provided in the Appendix." }, { "heading": "5.2 MUJOCO LOCOMOTION BENCHMARKS", "text": "We now compare ARAC against the CEM-TD3 (Pourchot & Sigaud, 2019), ERL (Khadka & Tumer, 2018) and CERL (Khadka et al., 2019) multi-agent baselines on seven continuous control tasks from the MuJoCo suite (Duan et al., 2016): Ant-v2, HalfCheetah-v2, Humanoid-v2, HumanoidStandup-v2, Hopper-v2, Walker2d-v2 and Humanoid (rllab). We also designed a sparse reward environment, SparseHumanoid-v2. All algorithms are run over 1M time steps on each environment, except Humanoid (rllab), which gets 2M time steps, and SparseHumanoid-v2, which gets 0.6M time steps. We also include a comparison against single-agent baselines.\nARAC performs R = 10 rollouts for evaluation every 10,000 interaction steps with the environment. We consider a small population of N = 5 individuals with K = 2 as elites. Every SAC agent has one feedforward hidden layer of 256 units acting as state embedding, followed by a radial flow of length ∈ {3, 4}. A temperature of α = 0.05 or 0.2 is used across all the environments (see the Appendix for more details). AR operations are carried out by sampling uniformly n = 5 archived policies from G. Parameter details are provided in the Appendix (Table 4). All networks are trained with the Adam optimizer (Kingma & Ba, 2015) using a learning rate of 3E−4. The baselines CEM-TD3 (https://github.com/apourchot/CEM-RL), ERL (https://github.com/ShawK91/erl_paper_nips18) and CERL (https://github.com/IntelAI/cerl) use the code contained in their respective repositories.\nFigure 4 displays the performance of all algorithms on three environments over time steps (see Appendix Figure 7 for all environments). Results are averaged over 5 random seeds. Table 1 reports the best observed reward for each method.\nSmall state space environments HalfCheetah-v2, Hopper-v2, and Walker2d-v2 are low-dimensional state space environments (d ≤ 17). Except for HalfCheetah-v2, the proposed approach shows comparable results with its competitors. Those results match the findings of Plappert et al. (2018) that some environments with well-structured dynamics require little exploration.
Full learning curves can be found in the Appendix.\nDeceptive reward and large state space environments Humanoid-v2, HumanoidStandup-v2 and Humanoid (rllab) are bipedal environments with high-dimensional state spaces (d = 376 and d = 147) and are known to trap algorithms in suboptimal solutions. In addition to the legs, the agent also needs to control the arms, which may influence the way it walks and hence induce deceptive rewards (Conti et al., 2018). Figure 4 shows the learning curves on MuJoCo tasks. We observe that ARAC beats the baselines in performance as well as in convergence rate.\nAnt-v2 is another high-dimensional state space environment (d ≥ 100). In an unstable setup, a naive algorithm implementing an unbalanced fast walk could still generate high reward (which takes into account the distance from the start) instead of learning to stand, stabilize, and walk (as expected).\nSparse reward environment To test ARAC in a sparse reward environment, we created SparseHumanoid-v2. The dynamics are the same as Humanoid-v2, but a reward of +1 is granted only when the center of mass of the agent is above a threshold (set to 0.6 units in our case). The challenge lies not only in the sparsity of the reward but also in the complex body dynamics that can make the agent fall down, terminating the episode. As shown in Figure 4, ARAC is the only method that can achieve nonzero performance. A comparison against single-agent methods in the Appendix also shows better performance for ARAC.\nSample efficiency compared with single-agent methods Figure 3 (in the Appendix) also shows how the sample efficiency of the population-based ARAC compares to a single SAC agent (with and without NFs) and other baseline methods (SAC, TD3). On Humanoid-v2 and Ant-v2, ARAC converges faster, reaching the 6k (4k, respectively) milestone performance after only 1M steps, while a single SAC agent requires 4M (3M, respectively) steps according to (Haarnoja et al., 2018). In general, ARAC achieves competitive results (no flat curves) and makes the most difference (faster convergence and better performance) in the biped environments.\nAttraction-repulsion ablation study To illustrate the impact of repulsive forces, we introduce a hyperparameter λ in the overall loss (Eq. 5):\n\mathcal{L}_{\theta,\phi,\lambda} = \mathcal{L}_{\theta,\phi,SAC} + \lambda \mathcal{L}_{\phi,AR} (10)\nWe ran an ablation analysis on Humanoid-v2 by varying that coefficient. For two random states, we sampled 500 actions from all agents and mapped these actions onto a two-dimensional space (via t-SNE). Appendix Figure 5 shows that without repulsion (λ = 0), actions from all agents are entangled, while repulsion (λ > 0) forces agents to behave differently and hence explore different regions of the action space.\nThe second ablation study is dedicated to highlighting the differences between a Gaussian policy (similar to Hong et al. (2018)) and an NF policy under AR operators. As one can observe in Figure 6, using a Gaussian policy deteriorates the solution, as the repulsive KL term drives apart the means of agents and blows up or shrinks the variance of the Gaussian policy. On the other hand, applying the AR term on the NF layers maximizes the KL conditioned on the mean and variance of both base policies, resulting in a solution which allows sufficient exploration.
More details are provided in the Appendix.
Finally, through a toy example subject to AR, we characterize the policy’s shape when increasing the number of radial flows in Figure 2 (experimental setup in the Appendix). Unlike the diagonal Gaussian policy (SAC), which has symmetry constraints, increasing the number of flows allows the radial policy to adopt more complex shapes (from bottom to top)." }, { "heading": "6 CONCLUSION", "text": "In this paper, we addressed the issue of RL domains with deceptive rewards by introducing a population-based search model for optimal policies using attraction-repulsion operators. Our method relies on powerful density estimators (normalizing flows) to let policies exploit the reward landscape under AR constraints. Our ablation studies showed that (1) the strength of AR and (2) the number of flows are the two factors that predominantly affect the shape of the policy. Selecting the correct AR coefficient is therefore important to obtain good performance, while at the same time preventing premature convergence.
Empirical results on the MuJoCo suite demonstrate high performance of the proposed method in most settings, including with sparse rewards. Moreover, in biped environments that are known to trap algorithms into suboptimal solutions, ARAC enjoys higher sample efficiency and better performance compared to its competitors, which confirms our intuitions on using AR with normalizing flows. As future steps, borrowing methods from the multi-objective optimization literature could allow one to combine other diversity metrics with the performance objective, in turn improving the coverage of the solution space among the individuals by working with the corresponding Pareto front (Horn et al., 1994)." }, { "heading": "APPENDIX", "text": "" }, { "heading": "REPRODUCIBILITY CHECKLIST", "text": "We follow the reproducibility checklist (Pineau, 2018) and point to the relevant sections explaining each item here. For all algorithms presented, check if you include:
• A clear description of the algorithm, see main paper and included codebase. The proposed approach is completely described by Alg. 1 (main paper), 2 (Appendix), and 3 (Appendix). The proposed population-based method uses attraction-repulsion operators in order to enforce a better policy space coverage by different agents. • An analysis of the complexity (time, space, sample size) of the algorithm. See Appendix Figures 7 and 3. Experimentally, we demonstrate improvement in sample complexity as discussed in our main paper. In terms of computation time, the proposed method scales linearly with the population size if agents are evaluated sequentially (as presented in Alg. 1 for clarity). However, as mentioned in the paper, this can be parallelized. All our results are obtained using M small network architectures with one hidden layer of 256 units followed by f layers of |A| + 2 units each (f being the number of radial flows and |A| being the action space dimension). • A link to a downloadable source code, including all dependencies. The code is included with the Appendix as a zip file; all dependencies can be installed using Python’s package manager. Upon publication, the code will be available on GitHub.
For all figures and tables that present empirical results, check if you include:
• A complete description of the data collection process, including sample size. We use standard benchmarks provided in OpenAI Gym (Brockman et al., 2016). • A link to a downloadable version of the dataset or simulation environment.
See https://github.com/openai/gym for the OpenAI Gym benchmarks and https://www.roboti.us/index.html for the MuJoCo suite. • An explanation of how samples were allocated for training / validation / testing. We do not use a training-validation-test split, but instead report the mean performance (and one standard deviation) of the policy at evaluation time, obtained with 5 random seeds. • An explanation of any data that were excluded. We did not compare on easy environments (e.g. Reacher-v2) because all existing methods perform well on them. In that case, the improvement of our method upon baselines is incremental and not worth mentioning. • The exact number of evaluation runs. 5 seeds for MuJoCo experiments, with 1M, 2M or 3M environment steps depending on the domain. • A description of how experiments were run. See Section 5 in the main paper and the didactic example details in the Appendix. • A clear definition of the specific measure or statistics used to report results. Undiscounted returns across the whole episode are reported, and in turn averaged across 5 seeds. • Clearly defined error bars. Confidence intervals and table values are always mean ± 1 standard deviation over 5 seeds. • A description of results with central tendency (e.g. mean) and variation (e.g. stddev). All results use the mean and standard deviation. • A description of the computing infrastructure used. All runs used 1 CPU for all experiments (toy and MuJoCo) with 8 GB of memory." }, { "heading": "IMPACT OF REPULSIVE FORCE", "text": "To illustrate the impact of the repulsive force coefficient λ, we ran an ablation analysis by varying that coefficient (recall that the overall loss function is Lπ = Lπ,SAC + λLAR, where λ = 1 in our experiments).
For two random states, we sampled 500 actions from all agents and mapped these actions into a common 2-dimensional space (via t-SNE).
As shown in the figure above, policies trained without AR (λ = 0) result in entangled actions, while increasing the repulsive coefficient λ forces agents to take different actions and hence explore different regions of the policy space. Note that due to the specific nature of t-SNE, the policies are shown as Gaussians in a lower-dimensional embedding, which is not necessarily the case in the true space." }, { "heading": "STABILIZING ATTRACTION-REPULSION WITH NORMALIZING FLOW", "text": "In this section, we illustrate the consequence of the AR operators with a Gaussian policy (as in Hong et al. (2018)) and our normalizing flow policy for Ant-v2, Humanoid-v2 and HalfCheetah-v2. As shown in the figure below, AR with Gaussian policies yields worse results. One reason is that the KL term drives apart the mean and variance of the Gaussian policy, which deteriorates the main objective of maximizing the reward. On the other hand, our method applies the AR term only on the NF layers, which allows enough exploration without compromising the main objective." }, { "heading": "COMPARING ARAC AGAINST BASELINES ON MUJOCO TASKS", "text": "Figure 7 shows the performance of ARAC and the baselines (CEM-TD3, CERL and ERL) over time steps. Learning curves are averaged over 5 random seeds and displayed with one standard deviation. Evaluation is done every 10,000 environment steps using 10 rollouts per agent. Overall, ARAC has reasonable performance on all tasks (no flat curves) and demonstrates high performance, especially in humanoid tasks."
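The action-embedding visualization described above can be reproduced along the following lines; the variable names (actions, agent_ids) are ours and purely illustrative.

import numpy as np
from sklearn.manifold import TSNE

# actions: (num_agents * 500, action_dim) array with 500 actions sampled per agent
# at the same fixed state; agent_ids records which agent produced each action
embedding = TSNE(n_components=2, perplexity=30).fit_transform(actions)
# scatter-plotting the embedding colored by agent_ids shows entangled clouds for
# lambda = 0 and clearly separated clouds once the repulsive term is active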
}, { "heading": "BENEFITS OF POPULATION-BASED STRATEGIES: ARAC AGAINST SINGLE AGENTS", "text": "In this section, we highlight the benefits of the proposed population-based strategy by comparing with single agents. Figure 3 shows the performance of ARAC against a single SAC agent (with and without normalizing flows). Learning curves are averaged over 5 random seeds and displayed with one standard deviation. Evaluation is done every 10, 000 environment steps using 10 rollouts per agent. We observe a high beneficial impact on the convergence rate as well as on the performance. ARAC outperforms single agents in almost all tasks (except for HalfCheetah-v2 and Walker-v2) with large improvement. Note the high sample efficiency on humanoid environments (Humanoid-v2 and Humanoid (rllab)), where it shows faster convergence in addition to better performance. Indeed, on Humanoid (rllab) a single SAC agent reaches the 4k milestone after 4M steps (Haarnoja et al., 2018) while ARAC achieves this performance in less than 2M steps. Also, in SparseHumanoid-v2, due to its better coordinated exploration, ARAC could find a good solution faster than SAC-NF." }, { "heading": "OVERALL PERFORMANCES ON MUJOCO TASKS", "text": "ARAC CEM-TD3 CERL ERL SAC - NF SAC TD3 Ant-v2 6,044 ± 216 4, 239± 1, 048 1, 639± 564 1, 442± 819 4, 912± 954 4, 370± 173 4, 372± 900 HalfCheetah-v2 10, 264± 271 10,659 ± 1,473 5, 703± 831 6, 746± 295 8, 429± 818 11,896 ± 574 9, 543± 978 Hopper-v2 3,587 ± 65 3,655 ± 82 2, 970± 341 1, 149± 3 3, 538± 108 2, 794± 729 3, 564± 114\nHumanoid-v2 5,965 ± 51 212± 1 4, 756± 454 551± 60 5, 506± 147 5, 504± 116 71± 10 HumanoidStandup-v2 175k ± 38k 29k ± 4k 117k ± 8k 129k ± 4k 116k ± 9k 149k ± 7k 54k ± 24k\nHumanoid (rllab) 14,234 ± 7251 1, 334± 551 3, 340± 3, 340 57± 17 5, 531± 4, 435 1, 963± 1, 384 286± 151 Walker2d-v2 4,704 ± 261 4,710 ± 320 4,3860 ± 615 1, 107± 60 5,196 ± 527 3,783 ± 366 4,682 ± 539\nSparseHumanoid-v2 816 ± 20 0± 0 1.32± 2.64 8.65± 15.90 547 ± 268 88 ± 159 0± 0\nTable 2: Maximum average return after 1M (2M for Humanoid (rllab) and 600k for SparseHumanoid-v2) time steps ± one standard deviation on 5 random seeds. Bold: best methods when the gap is less than 100 units.\nARAC TRPO PPO Trust-PCL Plappert et al. (2017) Touati et al. (2018) Hong et al. (2018) HalfCheetah-v2 10,264 −15 2, 600 2, 200 5, 000 7, 700 4, 200\nWalker-v2 4,764 2, 400 4, 050 400 850 500 N/A Hopper-v2 3,588 600 3, 150 280 2, 500 400 N/A\nAnt-v2 6,044 −76 1, 000 1, 500 N/A N/A N/A Humanoid-v2 5,939 400 400 N/A N/A N/A 1, 250\nHumanoidStandup-v2 163,884 80, 000 N/A N/A N/A N/A N/A Humanoid (rllab) 4,117 23 200 N/A N/A N/A N/A\nTable 3: Performance after 1M (except for rllab which is 2M) timesteps on 5 seeds. Values taken from their corresponding papers. N/A means the values were not available in the original paper." }, { "heading": "EXPERIMENTAL PARAMETERS", "text": "Table 4 provides the hyperparameters of ARAC used to obtain results in the MuJoCo domains. The noise input for normalizing flows in SAC policies (see Sec. 2.3) is sampled from N (0, σ), where the variance σ is a function of the state (either fixed at a given value or learned)." }, { "heading": "IMPACT OF NUMBER OF FLOWS ON THE POLICY SHAPE", "text": "We used a single SAC agent with different radial flows numbers and randomly initialized weights, starting with actions centered at (0, 0). All flow parameters are `1 regularized with hyperparameter 2. The agent is trained with the classical evidence lower bound (ELBO) objective augmented with the AR loss (Eq. 
1), where the coefficient of the repulsive policy π′ is given by βt = 10/(t+1). Fig. 8 shows how both the NF and the learned-variance Gaussian policies manage to recover the target policy. We see that the NF takes advantage of its flexible parametrization to adjust its density and can show asymmetric properties, unlike the Gaussian distribution. This can indeed be advantageous in non-symmetric environments where the Gaussian policy would be trapped in a suboptimal behavior. Finally, increasing the number of flows (from bottom to top) can lead to more complex policy shapes." }, { "heading": "6.1 VARIANCE OF FITNESS IN THE ARCHIVE", "text": "Due to the high computation time of behavioral-diversity baselines such as DIAYN, we propose to use the agent’s fitness (i.e. undiscounted returns) as the quantity to repulse from or attract to.
Figure ?? shows the variance of the archive across three MuJoCo domains: Ant, Humanoid and HumanoidStandup. As training progresses, the clustering approach allows maintaining a high variance in the archive, preventing mode collapse to a single, "average" fitness." }, { "heading": "6.2 PSEUDO-CODE FOR ARAC", "text": "Algorithm 1 ARAC: Attraction-Repulsion Actor-Critic
1: Input: population size M; number of elites K; maximum archive capacity G; archive sample size n; number of evaluation rollouts R; actor coefficient p; strategy (either proactive or reactive).
2: Initialize value function network Vν and critic network Qω
3: Initialize population of policy networks {πmφ,θ}Mm=1
4: Initialize empty archive G and randomly assign K individuals to top-K
5: total_step ← 0
6: while total_step ≤ max_step do
7: step ← 0
8: for agent m = 1 . . . M do (Collect samples)
9: (_, step s) ← rollout(πm, with noise, over 1 episode)
10: step ← step + s
11: total_step ← total_step + s
12: end for
13: C = step/K
14: for policy πe in top-K do (Update critic)
15: Update critic with πe for C mini-batches (Eq. 3)
16: Update value function (Eq. 4)
17: end for
18: for agent m = 1 . . . M do (Update actors)
19: if policy πm is in top-K then
20: Sample n archived policies uniformly from G
21: Update actor πm for (step/M) · p mini-batches (Eq. 5 and 8 or 9)
22: else
23: Update actor πm for (step/M) · p mini-batches (Eq. 2)
24: end if
25: end for
26: for agent m = 1 . . . M do (Evaluate actors)
27: (Fitnessm, _) ← rollout(πm, without noise, over R episodes)
28: end for
29: Rank population {πmφ,θ}Mm=1 and identify top-K
30: update_archive(G, {πmφ,θ}Mm=1, G)
31: end while" }, { "heading": "COMPLEMENTARY PSEUDO-CODE FOR ARAC", "text": "Algorithms 2 and 3 respectively provide the pseudo-code of the functions rollout and update_archive used in Algorithm 1.
Algorithm 2 rollout
Input: actor π; noise status; number of episodes E; replay buffer B
Fitness ← 0
for episode = 1, . . . , E do
s ← Initial state s0 from the environment
for step t = 0 . . . termination do
if with noise then Sample noise z else Set z ← 0 end if
at ∼ π(.|st, z)
Observe st+1 ∼ P(·|st, at) and obtain reward rt
Fitness ← Fitness + rt
Store transition (st, at, rt, st+1) in B
end for
end for
Fitness ← Fitness/E
return Average fitness per episode and number of steps performed
Algorithm 3 update_archive
Input: archive G; population of size M; maximal archive capacity G.
if |G| < G then
Add all agents of the current population to G
else
c1, c2 ← 2-means(fitness of individuals in G)
for agent m = 1, . . . , M do
Assign agent m to the closest cluster c ∈ {c1, c2} based on its fitness
Sample an archived agent j ∼ Uniform(c)
Replace archived individual j by m
end for
end if
return Updated archive G" } ]
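For reference, a compact Python rendering of Algorithm 3 follows; the fitness attribute and the use of scikit-learn's KMeans are illustrative choices of ours (any 2-means routine works), and, as in the pseudo-code above, the clustering is computed once before the replacements.

import random
import numpy as np
from sklearn.cluster import KMeans

def update_archive(archive, population, capacity):
    # Algorithm 3: keep the archive at `capacity` agents while preserving
    # both low- and high-fitness behaviours via 2-means on fitness.
    if len(archive) < capacity:
        archive.extend(population)
        return archive
    fitness = np.array([a.fitness for a in archive]).reshape(-1, 1)
    km = KMeans(n_clusters=2).fit(fitness)
    for agent in population:
        # assign the new agent to the cluster whose centroid is closest to its fitness
        c = int(np.argmin(np.abs(km.cluster_centers_.ravel() - agent.fitness)))
        members = np.flatnonzero(km.labels_ == c)
        j = int(random.choice(members))  # archived agent j ~ Uniform(c)
        archive[j] = agent               # replace archived individual j by the new agent
    return archive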
2019
null
SP:e4f5ca770474ba98dc7643522ea6435f0586c292
[ "This paper propose an extension to deterministic autoencoders. Motivated from VAEs, the authors propose RAEs, which replace the noise injection in the encoders of VAEs with an explicit regularization term on the latent representations. As a result, the model becomes a deterministic autoencoder with a L_2 regularization on the latent representation z. To make the model generalize well, the authors also add a decoder regularization term L_REG. In addition, due to the encoder in RAE is deterministic, the authors propose several ex-post density estimation techniques for generating samples.", "The paper studies (the more conventional) deterministic auto-encoders, as they are easier to train than VAE. To then try to maintain the model's capability of approximating the data distribution and to draw/synthesize new unseen samples, the paper both looks at imposing additional regularization terms towards a smooth decoder and proposes to sample from a latent distribution that's induced from empirical embeddings (similar to an aggregate posterior in VAE). Experiments are mostly around contrasting VAEs with the proposed RAEs in terms of comparing the quality of the generated samples." ]
Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models. However, learning a VAE from data poses still unanswered theoretical questions and considerable practical challenges. In this work, we propose an alternative framework for generative modeling that is simpler, easier to train, and deterministic, yet has many of the advantages of VAEs. We observe that sampling a stochastic encoder in a Gaussian VAE can be interpreted as simply injecting noise into the input of a deterministic decoder. We investigate how substituting this kind of stochasticity, with other explicit and implicit regularization schemes, can lead to an equally smooth and meaningful latent space without forcing it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism to sample new data, we introduce an ex-post density estimation step that can be readily applied also to existing VAEs, improving their sample quality. We show, in a rigorous empirical study, that the proposed regularized deterministic autoencoders are able to generate samples that are comparable to, or better than, those of VAEs and more powerful alternatives when applied to images as well as to structured data such as molecules. 1
[ { "affiliations": [], "name": "DETERMINISTIC AUTOENCODERS" }, { "affiliations": [], "name": "Partha Ghosh" }, { "affiliations": [], "name": "Mehdi S. M. Sajjadi" }, { "affiliations": [], "name": "Antonio Vergari" }, { "affiliations": [], "name": "Michael Black" }, { "affiliations": [], "name": "Bernhard Schölkopf" } ]
[ { "authors": [ "Alexander Alemi", "Ben Poole", "Ian Fischer", "Joshua Dillon", "Rif A Saurous", "Kevin Murphy" ], "title": "Fixing a broken ELBO", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Guozhong An" ], "title": "The effects of adding noise during backpropagation training on a generalization performance", "venue": "In Neural computation,", "year": 1996 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Johannes Ballé", "Valero Laparra", "Eero P Simoncelli" ], "title": "End-to-end optimized image compression", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "M. Bauer", "A. Mnih" ], "title": "Resampled priors for variational autoencoders", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Li Yao", "Guillaume Alain", "Pascal Vincent" ], "title": "Generalized denoising auto-encoders as generative models", "venue": "In NeurIPS,", "year": 2013 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine", "venue": null, "year": 2006 }, { "authors": [ "Samuel R Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "In CoNLL,", "year": 2016 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Yuri Burda", "Roger Grosse", "Ruslan Salakhutdinov" ], "title": "Importance weighted autoencoders", "venue": "arXiv preprint arXiv:1509.00519,", "year": 2015 }, { "authors": [ "Xi Chen", "Diederik P Kingma", "Tim Salimans", "Yan Duan", "Prafulla Dhariwal", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Variational lossy autoencoder", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Mary Kathryn Cowles", "Bradley P Carlin" ], "title": "Markov chain Monte Carlo convergence diagnostics: a comparative review", "venue": "In Journal of the American Statistical Association,", "year": 1996 }, { "authors": [ "Bin Dai", "David Wipf" ], "title": "Diagnosing and enhancing VAE models", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Mathieu Germain", "Karol Gregor", "Iain Murray", "Hugo Larochelle" ], "title": "Made: Masked autoencoder for distribution estimation", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Partha Ghosh", "Arpan Losalka", "Michael J Black" ], "title": "Resisting adversarial attacks using Gaussian mixture variational autoencoders", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Rafael Gómez-Bombarelli", "Jennifer N Wei", "David Duvenaud", "José Miguel Hernández-Lobato", "Benjamı́n Sánchez-Lengeling", "Dennis Sheberla", "Jorge Aguilera-Iparraguirre", "Timothy D Hirzel", "Ryan P Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a data-driven continuous representation of molecules", "venue": "In ACS central science,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of Wasserstein GANs", "venue": "In NeurIPS,", "year": 2017 
}, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Günter Klambauer", "Sepp Hochreiter" ], "title": "GANs trained by a two time-scale update rule converge to a Nash equilibrium", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "Beta-VAE: Learning basic visual concepts with a constrained variational framework", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Matthew D Hoffman", "Matthew J Johnson" ], "title": "Elbo surgery: yet another way to carve up the variational evidence lower bound", "venue": "In Workshop in Advances in Approximate Bayesian Inference,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Junction tree variational autoencoder for molecular graph generation", "venue": "arXiv preprint arXiv:1802.04364,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improving variational inference with inverse autoregressive flow", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning Multiple Layers of Features", "venue": "Tiny Images,", "year": 2009 }, { "authors": [ "Karol Kurach", "Mario Lucic", "Xiaohua Zhai", "Marcin Michalski", "Sylvain Gelly" ], "title": "The GAN landscape: Losses, architectures, regularization, and normalization", "venue": "arXiv preprint arXiv:1807.04720,", "year": 2018 }, { "authors": [ "Matt J Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar variational autoencoder", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Hugo Larochelle", "Iain Murray" ], "title": "The neural autoregressive distribution estimator", "venue": "In AISTATS,", "year": 2011 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "In IEEE,", "year": 1998 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep Learning Face Attributes in the Wild", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Mario Lucic", "Karol Kurach", "Marcin Michalski", "Sylvain Gelly", "Olivier Bousquet" ], "title": "Are GANs created equal? 
A large-scale study", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Alireza Makhzani", "Jonathon Shlens", "Navdeep Jaitly", "Ian Goodfellow", "Brendan Frey" ], "title": "Adversarial autoencoders", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Lars Mescheder", "Andreas Geiger", "Sebastian Nowozin" ], "title": "Which training methods for GANs do actually converge", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Ruslan Salakhutdinov", "Nathan Srebro" ], "title": "Geometry of optimization and implicit regularization in deep learning", "venue": "arXiv preprint arXiv:1705.03071,", "year": 2017 }, { "authors": [ "Edward O Pyzer-Knapp", "Changwon Suh", "Rafael Gómez-Bombarelli", "Jorge Aguilera-Iparraguirre", "Alán Aspuru-Guzik" ], "title": "What is high-throughput virtual screening? A perspective from organic materials discovery", "venue": "Annual Review of Materials Research,", "year": 2015 }, { "authors": [ "Ali Razavi", "Aaron van den Oord", "Oriol Vinyals" ], "title": "Generating diverse high-fidelity images with VQ-VAE-2", "venue": "arXiv preprint arXiv:1906.00446,", "year": 2019 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Salah Rifai", "Pascal Vincent", "Xavier Muller", "Xavier Glorot", "Yoshua Bengio" ], "title": "Contractive autoencoders: Explicit invariance during feature extraction", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Salah Rifai", "Yoshua Bengio", "Yann Dauphin", "Pascal Vincent" ], "title": "A generative process for sampling contractive auto-encoders", "venue": "In ICML,", "year": 2012 }, { "authors": [ "Mihaela Rosca", "Balaji Lakshminarayanan", "Shakir Mohamed" ], "title": "Distribution matching in variational inference", "venue": "arXiv preprint arXiv:1802.06847,", "year": 2018 }, { "authors": [ "Mehdi S.M. Sajjadi", "Bernhard Schölkopf", "Michael Hirsch" ], "title": "Enhancenet: Single image superresolution through automated texture synthesis", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Mehdi S.M. Sajjadi", "Olivier Bachem", "Mario Lucic", "Olivier Bousquet", "Sylvain Gelly" ], "title": "Assessing generative models via precision and recall", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Aliaksei Severyn", "Erhardt Barth", "Stanislau Semeniuta" ], "title": "A hybrid convolutional variational autoencoder for text generation", "venue": "In Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Jocelyn Sietsma", "Robert JF Dow" ], "title": "Creating artificial neural networks that generalize", "venue": "In Neural networks. 
Elsevier,", "year": 1991 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Casper Kaae Sønderby", "Jose Caballero", "Lucas Theis", "Wenzhe Shi", "Ferenc Huszár" ], "title": "Amortised MAP Inference for Image Super-resolution", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Lucas Theis", "Aäron van den Oord", "Matthias Bethge" ], "title": "A note on the evaluation of generative models", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Andrey N Tikhonov", "Vasilii Iakkovlevich Arsenin" ], "title": "Solutions of ill-posed problems, volume", "venue": null, "year": 1977 }, { "authors": [ "Jakub Tomczak", "Max Welling" ], "title": "VAE with a VampPrior", "venue": "In AISTATS,", "year": 2018 }, { "authors": [ "George Tucker", "Andriy Mnih", "Chris J Maddison", "John Lawson", "Jascha Sohl-Dickstein" ], "title": "REBAR: low-variance, unbiased gradient estimates for discrete latent variable models", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Aaron van den Oord", "Oriol Vinyals" ], "title": "Neural discrete representation learning", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Extracting and composing robust features with denoising autoencoders", "venue": "In ICML,", "year": 2008 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide Residual Networks", "venue": "In BMVC,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ], "title": "Towards deeper understanding of variational autoencoding models", "venue": "arXiv preprint arXiv:1702.08658,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generative models lie at the core of machine learning. By capturing the mechanisms behind the data generation process, one can reason about data probabilistically, access and traverse the lowdimensional manifold the data is assumed to live on, and ultimately generate new data. It is therefore not surprising that generative models have gained momentum in applications such as computer vision (Sohn et al., 2015; Brock et al., 2019), NLP (Bowman et al., 2016; Severyn et al., 2017), and chemistry (Kusner et al., 2017; Jin et al., 2018; Gómez-Bombarelli et al., 2018).\nVariational Autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) cast learning representations for high-dimensional distributions as a variational inference problem. Learning a VAE amounts to the optimization of an objective balancing the quality of samples that are autoencoded through a stochastic encoder–decoder pair while encouraging the latent space to follow a fixed prior distribution. Since their introduction, VAEs have become one of the frameworks of choice among the different generative models. VAEs promise theoretically well-founded and more stable training than Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) and more efficient sampling mechanisms than autoregressive models (Larochelle & Murray, 2011; Germain et al., 2015).\nHowever, the VAE framework is still far from delivering the promised generative mechanism, as there are several practical and theoretical challenges yet to be solved. A major weakness of VAEs is\n∗Equal contribution. 1An implementation is available at: https://github.com/ParthaEth/Regularized_\nautoencoders-RAE-\nthe tendency to strike an unsatisfying compromise between sample quality and reconstruction quality. In practice, this has been attributed to overly simplistic prior distributions (Tomczak & Welling, 2018; Dai & Wipf, 2019) or alternatively, to the inherent over-regularization induced by the KL divergence term in the VAE objective (Tolstikhin et al., 2017). Most importantly, the VAE objective itself poses several challenges as it admits trivial solutions that decouple the latent space from the input (Chen et al., 2017; Zhao et al., 2017), leading to the posterior collapse phenomenon in conjunction with powerful decoders (van den Oord et al., 2017). Furthermore, due to its variational formulation, training a VAE requires approximating expectations through sampling at the cost of increased variance in gradients (Burda et al., 2015; Tucker et al., 2017), making initialization, validation, and annealing of hyperparameters essential in practice (Bowman et al., 2016; Higgins et al., 2017; Bauer & Mnih, 2019). Lastly, even after a satisfactory convergence of the objective, the learned aggregated posterior distribution rarely matches the assumed latent prior in practice (Kingma et al., 2016; Bauer & Mnih, 2019; Dai & Wipf, 2019), ultimately hurting the quality of generated samples. All in all, much of the attention around VAEs is still directed towards “fixing” the aforementioned drawbacks associated with them.\nIn this work, we take a different route: we question whether the variational framework adopted by VAEs is necessary for generative modeling and, in particular, to obtain a smooth latent space. 
We propose to adopt a simpler, deterministic version of VAEs that scales better, is simpler to optimize, and, most importantly, still produces a meaningful latent space and equivalently good or better samples than VAEs or stronger alternatives, e.g., Wasserstein Autoencoders (WAEs) (Tolstikhin et al., 2017). We do so by observing that, under commonly used distributional assumptions, training a stochastic encoder–decoder pair in VAEs does not differ from training a deterministic architecture where noise is added to the decoder’s input. We investigate how to substitute this noise injection mechanism with other regularization schemes in the proposed deterministic Regularized Autoencoders (RAEs), and we thoroughly analyze how this affects performance. Finally, we equip RAEs with a generative mechanism via a simple ex-post density estimation step on the learned latent space.\nIn summary, our contributions are as follows: i) we introduce the RAE framework for generative modeling as a drop-in replacement for many common VAE architectures; ii) we propose an ex-post density estimation scheme which greatly improves sample quality for VAEs, WAEs and RAEs without the need to retrain the models; iii) we conduct a rigorous empirical evaluation to compare RAEs with VAEs and several baselines on standard image datasets and on more challenging structured domains such as molecule generation (Kusner et al., 2017; Gómez-Bombarelli et al., 2018)." }, { "heading": "2 VARIATIONAL AUTOENCODERS", "text": "For a general discussion, we consider a collection of high-dimensional i.i.d. samples X = {xi}Ni=1 drawn from the true data distribution pdata(x) over a random variable X taking values in the input space. The aim of generative modeling is to learn from X a mechanism to draw new samples xnew ∼ pdata. Variational Autoencoders provide a powerful latent variable framework to infer such a mechanism. The generative process of the VAE is defined as\nznew ∼ p(Z), xnew ∼ pθ(X |Z = znew) (1) where p(Z) is a fixed prior distribution over a low-dimensional latent space Z. A stochastic decoder\nDθ(z) = x ∼ pθ(x | z) = p(X | gθ(z)) (2) links the latent space to the input space through the likelihood distribution pθ, where gθ is an expressive non-linear function parameterized by θ.2 As a result, a VAE estimates pdata(x) as the infinite mixture model pθ(x) = ∫ pθ(x | z)p(z)dz. At the same time, the input space is mapped to the latent space via a stochastic encoder\nEφ(x) = z ∼ qφ(z |x) = q(Z | fφ(x)) (3) where qφ(z |x) is the posterior distribution given by a second function fφ parameterized by φ. Computing the marginal log-likelihood log pθ(x) is generally intractable. One therefore follows a variational approach, maximizing the evidence lower bound (ELBO) for a sample x:\nlog pθ(x) ≥ ELBO(φ, θ,x) = Ez∼qφ(z |x) log pθ(x | z)−KL(qφ(z |x)||p(z)) (4) 2With slight abuse of notation, we use lowercase letters for both random variables and their realizations,\ne.g., pθ(x | z) instead of p(X |Z = z), when it is clear to discriminate between the two.\nMaximizing Eq. 4 over data X w.r.t. 
model parameters φ, θ corresponds to minimizing the loss\nargmin φ,θ Ex∼pdata LELBO = Ex∼pdata LREC + LKL (5)\nwhere LREC and LKL are defined for a sample x as follows:\nLREC = −Ez∼qφ(z |x) log pθ(x | z) LKL = KL(qφ(z |x)||p(z)) (6)\nIntuitively, the reconstruction loss LREC takes into account the quality of autoencoded samples x through Dθ(Eφ(x)), while the KL-divergence term LKL encourages qφ(z |x) to match the prior p(z) for each z which acts as a regularizer during training (Hoffman & Johnson, 2016)." }, { "heading": "2.1 PRACTICE AND SHORTCOMINGS OF VAES", "text": "To fit a VAE to data through Eq. 5 one has to specify the parametric forms for p(z), qφ(z |x), pθ(x | z), and hence the deterministic mappings fφ and gθ. In practice, the choice for the above distributions is guided by trading off computational complexity with model expressiveness. In the most commonly adopted formulation of the VAE, qφ(z |x) and pθ(x | z) are assumed to be Gaussian:\nEφ(x) ∼ N (Z|µφ(x), diag(σφ(x))) Dθ(Eφ(x)) ∼ N (X|µθ(z), diag(σθ(z))) (7)\nwith means µφ, µθ and covariance parameters σφ, σθ given by fφ and gθ. In practice, the covariance of the decoder is set to the identity matrix for all z, i.e., σθ(z) = 1 (Dai & Wipf, 2019). The expectation of LREC in Eq. 6 must be approximated via k Monte Carlo point estimates. It is expected that the quality of the Monte Carlo estimate, and hence convergence during learning and sample quality increases for larger k (Burda et al., 2015). However, only a 1-sample approximation is generally carried out (Kingma & Welling, 2014) since memory and time requirements are prohibitive for large k. With the 1-sample approximation, LREC can be computed as the mean squared error between input samples and their mean reconstructions µθ by a decoder that is deterministic in practice:\nLREC = ||x− µθ(Eφ(x))||22 (8)\nGradients w.r.t. the encoder parameters φ are computed through the expectation of LREC in Eq. 6 via the reparametrization trick (Kingma & Welling, 2014) where the stochasticity of Eφ is relegated to an auxiliary random variable which does not depend on φ:\nEφ(x) = µφ(x) + σφ(x) , ∼ N (0, I) (9)\nwhere denotes the Hadamard product. An additional simplifying assumption involves fixing the prior p(z) to be a d-dimensional isotropic Gaussian N (Z |0, I). For this choice, the KL-divergence for a sample x is given in closed form: 2LKL = ||µφ(x)||22 + d+ ∑d i σφ(x)i − logσφ(x)i.\nWhile the above assumptions make VAEs easy to implement, the stochasticity in the encoder and decoder are still problematic in practice (Makhzani et al., 2016; Tolstikhin et al., 2017; Dai & Wipf, 2019). In particular, one has to carefully balance the trade-off between the LKL term and LREC during optimization (Dai & Wipf, 2019; Bauer & Mnih, 2019). A too-large weight on the LKL term can dominateLELBO, having the effect of over-regularization. As this would smooth the latent space, it can directly affect sample quality in a negative way. Heuristics to avoid this include manually finetuning or gradually annealing the importance of LKL during training (Bowman et al., 2016; Bauer & Mnih, 2019). We also observe this trade-off in a practical experiment in Appendix A.\nEven after employing the full array of approximations and “tricks” to reach convergence of Eq. 5 for a satisfactory set of parameters, there is no guarantee that the learned latent space is distributed according to the assumed prior distribution. 
In other words, the aggregated posterior distribution qφ(z) = Ex∼pdata q(z |x) has been shown not to conform well to p(z) after training (Tolstikhin et al., 2017; Bauer & Mnih, 2019; Dai & Wipf, 2019). This critical issue severely hinders the generative mechanism of VAEs (cf. Eq. 1) since latent codes sampled from p(z) (instead of q(z)) might lead to regions of the latent space that are previously unseen to Dθ during training. This results in generating out-of-distribution samples. We refer the reader to Appendix H for a visual demonstration of this phenomenon on the latent space of VAEs. We analyze solutions to this problem in Section 4." }, { "heading": "2.2 CONSTANT-VARIANCE ENCODERS", "text": "Before introducing our fully-deterministic take on VAEs, it is worth investigating intermediate flavors of VAEs with reduced stochasticity. Analogous to what is commonly done for decoders as discussed in the previous section, one can fix the variance of qφ(z |x) to be constant for all x. This simplifies the computation of Eφ from Eq. 9 to
ECVφ (x) = µφ(x) + ε, ε ∼ N (0, σI) (10)
where σ is a fixed scalar. Then, the KL loss term in a Gaussian VAE simplifies (up to a constant) to LCVKL = ||µφ(x)||22. We name this variant Constant-Variance VAEs (CV-VAEs). While CV-VAEs have been adopted in some applications such as variational image compression (Ballé et al., 2017) and adversarial robustness (Ghosh et al., 2019), to the best of our knowledge, there is no systematic study of them in the literature. We will fill this gap in our experiments in Section 6. Lastly, note that now σ in Eq. 10 is not learned along with the encoder as in Eq. 9. Nevertheless, it can still be fitted as a hyperparameter, e.g., by cross-validation, to maximize the model likelihood. This highlights the possibility to estimate a better parametric form for the latent space distribution after training, or in an outer loop including training. We address this and provide a more complex and flexible solution to deal with the prior structure over Z via ex-post density estimation in Section 4." }, { "heading": "3 DETERMINISTIC REGULARIZED AUTOENCODERS", "text": "Autoencoding in VAEs is defined in a probabilistic fashion: Eφ and Dθ map data points not to a single point, but rather to parameterized distributions (cf. Eq. 7). However, common implementations of VAEs as discussed in Section 2 admit a simpler, deterministic view for this probabilistic mechanism. A glance at the autoencoding mechanism of the VAE is revealing.
The encoder deterministically maps a data point x to mean µφ(x) and variance σφ(x) in the latent space. The input to Dθ is then simply the mean µφ(x) augmented with Gaussian noise scaled by σφ(x) via the reparametrization trick (cf. Eq. 9). In the CV-VAE, this relationship is even more obvious, as the magnitude of the noise is fixed for all data points (cf. Eq. 10). In this light, a VAE can be seen as a deterministic autoencoder where (Gaussian) noise is added to the decoder’s input.
We argue that this noise injection mechanism is a key factor in having a regularized decoder. Using random noise injection to regularize neural networks is a well-known technique that dates back several decades (Sietsma & Dow, 1991; An, 1996). It implicitly helps to smooth the function learned by the network at the price of increased variance in the gradients during training. In turn, decoder regularization is a key component in generalization for VAEs, as it improves random sample quality and achieves a smoother latent space.
Indeed, from a generative perspective, regularization is motivated by the goal to learn a smooth latent space where similar data points x are mapped to similar latent codes z, and small variations in Z lead to reconstructions by Dθ that vary only slightly.
We propose to substitute noise injection with an explicit regularization scheme for the decoder. This entails the substitution of the variational framework in VAEs, which enforces regularization on the encoder posterior through LKL, with a deterministic framework that applies other flavors of decoder regularization. By removing noise injection from a CV-VAE, we are effectively left with a deterministic autoencoder (AE). Coupled with explicit regularization for the decoder, we obtain a Regularized Autoencoder (RAE). Training a RAE thus involves minimizing the simplified loss
LRAE = LREC + βLRAEZ + λLREG (11)
where LREG represents the explicit regularizer for Dθ (discussed in Section 3.1) and LRAEZ = 1/2||z||22 (resulting from simplifying LCVKL ) is equivalent to constraining the size of the learned latent space, which is still needed to prevent unbounded optimization. Finally, β and λ are two hyperparameters that balance the different loss terms.
Note that for RAEs, no Monte Carlo approximation is required to compute LREC. This relieves the need for more samples from qφ(z |x) to achieve better image quality (cf. Appendix A). Moreover, by abandoning the variational framework and the LKL term, there is no need in RAEs for a fixed prior distribution over Z. Doing so however loses a clear generative mechanism for RAEs to sample from Z. We propose a method to regain random sampling ability in Section 4 by performing density estimation on Z ex-post, a step that is otherwise still needed for VAEs to alleviate the posterior mismatch issue." }, { "heading": "3.1 REGULARIZATION SCHEMES FOR RAES", "text": "Among possible choices for LREG, a first obvious candidate is Tikhonov regularization (Tikhonov & Arsenin, 1977) since it is known to be related to the addition of low-magnitude input noise (Bishop, 2006). Training a RAE within this framework thus amounts to adopting LREG = LL2 = ||θ||22, which effectively applies weight decay on the decoder parameters θ.
Another option comes from the recent GAN literature where regularization is a hot topic (Kurach et al., 2018) and where injecting noise into the input of the adversarial discriminator has led to improved performance in a technique called instance noise (Sønderby et al., 2017). To enforce Lipschitz continuity on adversarial discriminators, weight clipping has been proposed (Arjovsky et al., 2017), which is however known to significantly slow down training. More successfully, a gradient penalty on the discriminator can be used, similar to Gulrajani et al. (2017); Mescheder et al. (2018), yielding the objective LREG = LGP = ||∇Dθ(Eφ(x))||22, which bounds the gradient norm of the decoder w.r.t. its input.
Additionally, spectral normalization (SN) has been successfully proposed as an alternative way to bound the Lipschitz norm of an adversarial discriminator (Miyato et al., 2018).
SN normalizes each weight matrix θℓ in the decoder by an estimate of its largest singular value: θSNℓ = θℓ/s(θℓ), where s(θℓ) is the current estimate obtained through the power method.
In light of the recent successes of deep networks without explicit regularization (Zagoruyko & Komodakis, 2016; Zhang et al., 2017), it is intriguing to question the need for explicit regularization of the decoder in order to obtain a meaningful latent space. The assumption here is that techniques such as dropout (Srivastava et al., 2014), batch normalization (Ioffe & Szegedy, 2015), and adding noise during training (An, 1996) implicitly regularize the networks enough. Therefore, as a natural baseline to the LRAE objectives introduced above, we also consider the RAE framework without LREG and LRAEZ , i.e., a standard deterministic autoencoder optimizing LREC only. To complete our “autopsy” of the VAE loss, we additionally investigate deterministic autoencoders with decoder regularization, but without the LRAEZ term, as well as possible combinations of different regularizers in our RAE framework (cf. Table 3 in Appendix I).
Lastly, it is worth questioning if it is possible to formally derive our RAE framework from first principles. We answer this affirmatively, and show how to augment the ELBO optimization problem of a VAE with an explicit constraint, while not fixing a parametric form for qφ(z |x). This indeed leads to a special case of the RAE loss in Eq. 11. Specifically, we derive a regularizer like LGP for a deterministic version of the CV-VAE. Note that this derivation legitimates bounding the decoder’s gradients and as such it justifies the spectral norm regularizer as well, since the latter enforces the decoder’s Lipschitzness. We accommodate the full derivation in Appendix B." }, { "heading": "4 EX-POST DENSITY ESTIMATION", "text": "By removing stochasticity and, ultimately, the KL divergence term LKL from RAEs, we have simplified the original VAE objective at the cost of detaching the encoder from the prior p(z) over the latent space. This implies that i) we cannot ensure that the latent space Z is distributed according to a simple distribution (e.g., isotropic Gaussian) anymore and, consequently, ii) we lose the simple mechanism provided by p(z) to sample from Z as in Eq. 1.
As discussed in Section 2.1, issue i) is compromising the VAE framework in any case, as reported in several works (Hoffman & Johnson, 2016; Rosca et al., 2018; Dai & Wipf, 2019). To fix this, some works extend the VAE objective by encouraging the aggregated posterior to match p(z) (Tolstikhin et al., 2017) or by utilizing more complex priors (Kingma et al., 2016; Tomczak & Welling, 2018; Bauer & Mnih, 2019).
To overcome both i) and ii), we instead propose to employ ex-post density estimation over Z. We fit a density estimator denoted as qδ(z) to {z = Eφ(x) | x ∈ X}. This simple approach not only fits our RAE framework well, but it can also be readily adopted for any VAE or variants thereof such as the WAE as a practical remedy to the aggregated posterior mismatch without adding any computational overhead to the costly training phase.
The choice of qδ(z) needs to trade off expressiveness – to provide a good fit of an arbitrary space for Z – with simplicity, to improve generalization. For example, placing a Dirac distribution on each latent point z would allow the decoder to output only training sample reconstructions, which have high quality but do not generalize.
Striving for simplicity, we employ and compare a full covariance multivariate Gaussian with a 10-component Gaussian mixture model (GMM) in our experiments." }, { "heading": "5 RELATED WORKS", "text": "Many works have focused on diagnosing the VAE framework, the terms in its objective (Hoffman & Johnson, 2016; Zhao et al., 2017; Alemi et al., 2018), and ultimately augmenting it to solve optimization issues (Rezende & Viola, 2018; Dai & Wipf, 2019). With RAE, we argue that a simpler deterministic framework can be competitive for generative modeling.
Deterministic denoising (Vincent et al., 2008) and contractive autoencoders (CAEs) (Rifai et al., 2011) have received attention in the past for their ability to capture a smooth data manifold. Heuristic attempts to equip them with a generative mechanism include MCMC schemes (Rifai et al., 2012; Bengio et al., 2013). However, they are hard to diagnose for convergence, require considerable effort in tuning (Cowles & Carlin, 1996), and have not scaled beyond MNIST, leading to them being superseded by VAEs. While computing the Jacobian for CAEs (Rifai et al., 2011) is close in spirit to LGP for RAEs, the latter is much more computationally efficient. Approaches to cope with the aggregated posterior mismatch involve fixing a more expressive form for p(z) (Kingma et al., 2016; Bauer & Mnih, 2019), therefore altering the VAE objective and requiring considerable additional computational effort. Estimating the latent space of a VAE with a second VAE (Dai & Wipf, 2019) reintroduces many of the optimization shortcomings discussed for VAEs and is much more expensive in practice compared to fitting a simple qδ(z) after training.
Adversarial Autoencoders (AAE) (Makhzani et al., 2016) add a discriminator to a deterministic encoder–decoder pair, leading to sharper samples at the expense of higher computational overhead and the introduction of instabilities caused by the adversarial nature of the training process.
Wasserstein Autoencoders (WAE) (Tolstikhin et al., 2017) have been introduced as a generalization of AAEs by casting autoencoding as an optimal transport (OT) problem. Both stochastic and deterministic models can be trained by minimizing a relaxed OT cost function employing either an adversarial loss term or the maximum mean discrepancy score between p(z) and qφ(z) as a regularizer in place of LKL. Within the RAE framework, we look at this problem from a different perspective: instead of explicitly imposing a simple structure on Z that might impair the ability to fit high-dimensional data during training, we propose to model the latent space by an ex-post density estimation step.
The most successful VAE architectures for images and audio so far are variations of the VQ-VAE (van den Oord et al., 2017; Razavi et al., 2019). Despite the name, VQ-VAEs are neither stochastic nor variational, but deterministic autoencoders. VQ-VAEs are similar to RAEs in that they adopt ex-post density estimation. However, VQ-VAEs necessitate complex discrete autoregressive density estimators and a training loss that is non-differentiable due to quantizing Z.
Lastly, RAEs share some similarities with GLO (Bojanowski et al., 2018). However, differently from RAEs, GLO can be interpreted as a deterministic AE without an encoder, where the latent space is built “on demand” by optimization. On the other hand, RAEs augment deterministic decoders as in GANs with deterministic encoders."
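Before moving to the experiments, note that ex-post density estimation itself amounts to only a few lines; the sketch below fits a 10-component GMM, where z_train (the training embeddings Eφ(x)) and decode (the decoder Dθ) are assumed to be given.

import numpy as np
from sklearn.mixture import GaussianMixture

# z_train: (N, d) array of training-set embeddings E_phi(x)
q_delta = GaussianMixture(n_components=10, covariance_type="full")
q_delta.fit(z_train)

z_new, _ = q_delta.sample(n_samples=64)  # draw latent codes from q_delta(z)
x_new = decode(z_new)                    # decode with D_theta to obtain new samples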
}, { "heading": "6 EXPERIMENTS", "text": "Our experiments are designed to answer the following questions: Q1: Are sample quality and latent space structure in RAEs comparable to VAEs? Q2: How do different regularizations impact RAE performance? Q3: What is the effect of ex-post density estimation on VAEs and its variants?" }, { "heading": "6.1 RAES FOR IMAGE MODELING", "text": "We evaluate all regularization schemes from Section 3.1: RAE-GP, RAE-L2, and RAE-SN. For a thorough ablation study, we also consider only adding the latent code regularizer LRAEZ to LREC (RAE), and an autoencoder without any explicit regularization (AE). We check the effect of applying one regularization scheme while not including the LRAEZ term in the AE-L2 model. As baselines, we employ the regular VAE, constant-variance VAE (CV-VAE), Wasserstein Autoencoder (WAE) with the MMD loss as a state-of-the-art method, and the recent 2-stage VAE (2sVAE) (Dai & Wipf, 2019) which performs a form of ex-post density estimation via another VAE. For a fair comparison, we use the same network architecture for all models. Further details about the architecture and training are given in Appendix C.\nWe measure the following quantities: held-out sample reconstruction quality, random sample quality, and interpolation quality. While reconstructions give us a lower bound on the best quality\nachievable by the generative model, random sample quality indicates how well the model generalizes. Finally, interpolation quality sheds light on the structure of the learned latent space. The evaluation of generative models is a nontrivial research question (Theis et al., 2016; Sajjadi et al., 2017; Lucic et al., 2018). We report here the ubiquitous Fréchet Inception Distance (FID) (Heusel et al., 2017) and we provide precision and recall scores (PRD) (Sajjadi et al., 2018) in Appendix E.\nTable 1 summarizes our main results. All of the proposed RAE variants are competitive with the VAE, WAE and 2sVAE w.r.t. generated image quality in all settings. Sampling RAEs achieve the best FIDs across all datasets when a modest 10-component GMM is employed for ex-post density estimation. Furthermore, even whenN is considered as qδ(z), RAEs rank first with the exception of MNIST, where it competes for the second position with a VAE. Our best RAE FIDs are lower than the best results reported for VAEs in the large scale comparison of (Lucic et al., 2018), challenging even the best scores reported for GANs. While we are employing a slightly different architecture than theirs, our models underwent only modest finetuning instead of an extensive hyperparameter search. A comparison of the different regularization schemes for RAEs (Q2) yields no clear winner across all settings as all perform equally well. Striving for a simpler implementation, one may prefer RAE-L2 over the GP and SN variants.\nFor completeness, we investigate applying multiple regularization schemes to our RAE models. We report the results of all possible combinations in Table 3, Appendix I. There, no significant boost of performance can be spotted when comparing to singly regularized RAEs.\nSurprisingly, the implicitly regularized RAE and AE models are shown to be able to score impressive FIDs when qδ(z) is fit through GMMs. FIDs for AEs decrease from 58.73 to 10.66 on MNIST and from 127.85 to 45.10 on CelebA – a value close to the state of the art. 
This is a remarkable result that follows a long series of recent confirmations that neural networks are surprisingly smooth by design (Neyshabur et al., 2017). It is also surprising that the lack of an explicitly fixed structure on the latent space of the RAE does not impede interpolation quality. This is further confirmed by the qualitative evaluation on CelebA as reported in Fig. 1 and for the other datasets in Appendix F, where RAE interpolated samples seem sharper than those of competitors, with smoother transitions.
Our results further confirm and quantify the effect of the aggregated posterior mismatch. In Table 1, ex-post density estimation consistently improves sample quality across all settings and models. A 10-component GMM halves FID scores from ∼20 to ∼10 for WAE and RAE models on MNIST and from 116 to 46 on CelebA. This is especially striking since this additional step is much cheaper and simpler than training a second-stage VAE as in 2sVAE (Q3). In summary, the results strongly support the conjecture that the simple deterministic RAE framework can challenge VAEs and stronger alternatives (Q1)." }, { "heading": "6.2 GRAMMARRAE: MODELING STRUCTURED INPUTS", "text": "We now evaluate RAEs for generating complex structured objects such as molecules and arithmetic expressions. We do this with a twofold aim: i) to investigate the latent space learned by RAE for more challenging input spaces that abide by some structural constraints, and ii) to quantify the gain of replacing the VAE in a state-of-the-art generative model with a RAE.
To this end, we adopt the exact architectures and experimental settings of the GrammarVAE (GVAE) (Kusner et al., 2017), which has been shown to outperform other generative alternatives such as the CharacterVAE (CVAE) (Gómez-Bombarelli et al., 2018). As in Kusner et al. (2017), we are interested in traversing the latent space learned by our models to generate samples (molecules or expressions) that best fit some downstream metric. This is done via Bayesian optimization (BO), considering log(1 + MSE) (lower is better) for the generated expressions w.r.t. some ground truth points, and the water-octanol partition coefficient (logP) (Pyzer-Knapp et al., 2015) (higher is better) in the case of molecules. A well-behaved latent space will not only generate molecules or expressions with better scores during the BO step, but it will also contain syntactically valid ones, i.e., samples that abide by a grammar of rules describing the problem.
Figure 2 summarizes our results over 5 trials of BO. Our GRAEs (Grammar RAE) achieve better average scores than CVAEs and GVAEs in generating expressions and molecules. This is also visible for the three best samples and their scores for all models, with the exception of the first best expression of GVAE. We include in the comparison also the GCVVAE, the equivalent of a CV-VAE for structured objects, as an additional baseline. We can observe that while the GCVVAE delivers better average scores for the simpler task of generating equations (even though the single three best equations are on par with GRAE), when generating molecules GRAEs deliver samples associated with much higher scores.
More interestingly, while GRAEs are almost equivalent to GVAEs for the easier task of generating expressions, the proportion of syntactically valid molecules for GRAEs greatly improves over GVAEs (from 28% to 72%)."
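As a side note, the latent interpolations discussed above can be produced with a simple linear scheme between two encoded points; whether the reported figures use linear or spherical interpolation is not stated here, so the sketch below is only one plausible rendering.

import numpy as np

def interpolate(decode, z1, z2, steps=10):
    # decode points on the straight line between two latent codes
    return [decode((1.0 - t) * z1 + t * z2) for t in np.linspace(0.0, 1.0, steps)]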
}, { "heading": "7 CONCLUSION", "text": "While the theoretical derivation of the VAE has helped popularize the framework for generative modeling, recent works have started to expose some discrepancies between theory and practice. We have shown that viewing sampling in VAEs as noise injection to enforce smoothness can enable one to distill a deterministic autoencoding framework that is compatible with several regularization techniques to learn a meaningful latent space. We have demonstrated that such an autoencoding framework can generate comparable or better samples than VAEs while getting around the practical drawbacks tied to a stochastic framework. Furthermore, we have shown that our solution of fitting a simple density estimator on the learned latent space consistently improves sample quality both for the proposed RAE framework as well as for VAEs, WAEs, and 2sVAEs which solves the mismatch between the prior and the aggregated posterior in VAEs." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We would like to thank Anant Raj, Matthias Bauer, Paul Rubenstein and Soubhik Sanyal for fruitful discussions.\nvariatio delectat!" }, { "heading": "A RECONSTRUCTION AND REGULARIZATION TRADE-OFF", "text": "We train a VAE on MNIST while monitoring the test set reconstruction quality by FID. Figure 3 (left) clearly shows the impact of more expensive k > 1 Monte Carlo approximations of Eq. 7 on sample quality during training. The commonly used 1-sample approximation is a clear limitation for VAE training.\nFigure 3 (right) depicts the inherent trade-off between reconstruction and random sample quality in VAEs. Enforcing structure and smoothness in the latent space of a VAE affects random sample quality in a negative way. In practice, a compromise needs to be made, ultimately leading to subpar performance." }, { "heading": "B A PROBABILISTIC DERIVATION OF REGULARIZATION", "text": "In this section, we propose an alternative view on enforcing smoothness on the output of Dθ by augmenting the ELBO optimization problem for VAEs with an explicit constraint. While we keep the Gaussianity assumptions over a stochastic Dθ and p(z) for convenience, we however are not fixing a parametric form for qφ(z |x) yet. We discuss next how some parametric restrictions over qφ(z |x) lead to a variation of the RAE framework in Eq. 11, specifically the introduction of LGP as a regularizer of a deterministic version of the CV-VAE. To start, we augment Eq. 5 as:\nargmin φ,θ Ex∼pdata(X) LREC + LKL (12)\ns.t. ||Dθ(z1)−Dθ(z2)||p < ∀ z1, z2 ∼ qφ(z |x) ∀x ∼ pdata where Dθ(z) = µθ(Eφ(x)) and the constraint on the decoder encodes that the output has to vary, in the sense of an Lp norm, only by a small amount for any two possible draws from the encoding of x . Let Dθ(z) : Rdim(z) → Rdim(x) be given by a set of dim(x) given by {Di(z) : Rdim(z) → R1}. Now we can upper bound the quantity ||Dθ(z1)−Dθ(z2)||p by dim(x)∗supi{||Di(z1)−Di(z2)||p}. Using mean value theorem ||Di(z1)−Di(z2)||p ≤ ||∇tDi((1− t)z1+ tz2)||p ∗ ||z1−z2||p. Hence supi{||Di(z1)−Di(z2)||p} ≤ supi{||∇tDi((1− t)z1 + tz2)||p ∗ ||z1− z2||p}. Now if we choose the domain of qφ(z |x) to be isotopic the contribution of ||z2− z1||p to the aforementioned quantity becomes a constant factor. Loosely speaking it is the radius of the bounding ball of domain of qφ(z |x). Hence the above term simplifies to supi{||∇tDi((1 − t)z1 + tz2)||p}. 
Recognizing that $z_1$ and $z_2$ are arbitrary lets us simplify this further to $\sup_i \|\nabla_z D_i(z)\|_p$. From this form of the smoothness constraint, it is apparent why the choice of a parametric form for $q_\phi(z\,|\,x)$ can be impactful during training. For a compactly supported isotropic PDF $q_\phi(z\,|\,x)$, the extension of the support $\sup \|z_1 - z_2\|_p$ would depend on its entropy $H(q_\phi(z\,|\,x))$ through some functional $r$. For instance, a uniform posterior over a hypersphere in $z$ would ascertain $r(H(q_\phi(z\,|\,x))) \cong e^{H(q_\phi(z\,|\,x))/n}$, where $n$ is the dimensionality of the latent space. Intuitively, one would look for parametric distributions that do not favor overfitting, e.g., degenerating into Dirac deltas (minimal entropy and support) along any dimension. To this end, an isotropic nature of $q_\phi(z\,|\,x)$ would favor such robustness against decoder overfitting. We can now rewrite the constraint as
$r(H(q_\phi(z\,|\,x))) \cdot \sup \|\nabla D_\theta(z)\|_p < \epsilon$ (13)
The $\mathcal{L}_{KL}$ term can be expressed in terms of $H(q_\phi(z\,|\,x))$ by decomposing it as $\mathcal{L}_{KL} = \mathcal{L}_{CE} - \mathcal{L}_{H}$, where $\mathcal{L}_{H} = H(q_\phi(z\,|\,x))$ and $\mathcal{L}_{CE} = H(q_\phi(z\,|\,x), p(z))$ represents a cross-entropy term. Therefore, the constrained problem in Eq. 12 can be written in a Lagrangian formulation by including Eq. 13:
$\operatorname*{arg\,min}_{\phi,\theta}\ \mathbb{E}_{x\sim p_{data}}\left[\mathcal{L}_{REC} + \mathcal{L}_{CE} - \mathcal{L}_{H} + \lambda \mathcal{L}_{LANG}\right]$ (14)
where $\mathcal{L}_{LANG} = r(H(q_\phi(z\,|\,x))) \cdot \|\nabla D_\theta(z)\|_p$. We argue that a reasonable simplifying assumption for $q_\phi(z\,|\,x)$ is to fix $H(q_\phi(z\,|\,x))$ to a single constant for all samples $x$. Intuitively, this can be understood as fixing the variance in $q_\phi(z\,|\,x)$ as we did for the CV-VAE in Section 2.2. With this simplification, Eq. 14 further reduces to
$\operatorname*{arg\,min}_{\phi,\theta}\ \mathbb{E}_{x\sim p_{data}(x)}\left[\mathcal{L}_{REC} + \mathcal{L}_{CE} + \lambda \|\nabla D_\theta(z)\|_p\right]$ (15)
We can see that $\|\nabla D_\theta(z)\|_p$ turns out to be the gradient penalty $\mathcal{L}_{GP}$ and $\mathcal{L}_{CE} = \|z\|_2^2$ corresponds to $\mathcal{L}^{\mathrm{RAE}}_{KL}$, thus recovering our RAE framework as presented in Eq. 11." }, { "heading": "C NETWORK ARCHITECTURE, TRAINING DETAILS AND EVALUATION", "text": "We follow the models adopted by Tolstikhin et al. (2017) with the difference that we consistently apply batch normalization (Ioffe & Szegedy, 2015). The latent space dimension is 16 for MNIST (LeCun et al., 1998), 128 for CIFAR-10 (Krizhevsky & Hinton, 2009) and 64 for CelebA (Liu et al., 2015).
For all experiments, we use the Adam optimizer with a starting learning rate of $10^{-3}$, which is cut in half every time the validation loss plateaus. All models are trained for a maximum of 100 epochs on MNIST and CIFAR-10 and 70 epochs on CelebA. We use a mini-batch size of 100 and pad MNIST digits with zeros to make the size 32×32. We use the official train, validation and test splits of CelebA. For MNIST and CIFAR-10, we set aside 10k train samples for validation. For random sample evaluation, we draw samples from N(0, I) for the VAE and WAE-MMD; for all remaining models, samples are drawn from a multivariate Gaussian whose parameters (mean and covariance) are estimated using training set embeddings. For the GMM density estimation, we utilize the training set embeddings for fitting and the validation set embeddings to verify that the GMM models are not overfitting to the training embeddings. However, due to the very low number of mixture components (10), we did not encounter overfitting at this step.
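To make this ex-post density estimation step concrete, the following is a minimal sketch of it; the encoder and decoder are assumed to be already trained, and the function names are illustrative:

```python
# Fit a 10-component GMM on training-set embeddings (EM capped at 100
# iterations, matching the surrounding description), then sample latents
# from the fitted q_delta(z) and decode them into images.
import numpy as np
from sklearn.mixture import GaussianMixture

def expost_gmm_sampler(encoder, decoder, train_images, n_samples=64):
    z_train = encoder(train_images)               # (N, latent_dim) codes
    gmm = GaussianMixture(n_components=10, covariance_type="full",
                          max_iter=100).fit(z_train)
    z, _ = gmm.sample(n_samples)                  # draw z ~ q_delta(z)
    return decoder(z.astype(np.float32))          # decoded random samples
```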
The GMM parameters are estimated by running EM for at most 100 iterations.
The architectures are as follows (Conv_n denotes a convolutional layer with n filters; M is defined below):
Encoder (MNIST): x ∈ R^{32×32} → Conv_128 → BN → ReLU → Conv_256 → BN → ReLU → Conv_512 → BN → ReLU → Conv_1024 → BN → ReLU → Flatten → FC_{16×M}
Encoder (CIFAR-10): x ∈ R^{32×32} → Conv_128 → BN → ReLU → Conv_256 → BN → ReLU → Conv_512 → BN → ReLU → Conv_1024 → BN → ReLU → Flatten → FC_{128×M}
Encoder (CelebA): x ∈ R^{64×64} → Conv_128 → BN → ReLU → Conv_256 → BN → ReLU → Conv_512 → BN → ReLU → Conv_1024 → BN → ReLU → Flatten → FC_{64×M}
Decoder (MNIST): z ∈ R^{16} → FC_{8×8×1024} → BN → ReLU → ConvT_512 → BN → ReLU → ConvT_256 → BN → ReLU → ConvT_1
Decoder (CIFAR-10): z ∈ R^{128} → FC_{8×8×1024} → BN → ReLU → ConvT_512 → BN → ReLU → ConvT_256 → BN → ReLU → ConvT_1
Decoder (CelebA): z ∈ R^{64} → FC_{8×8×1024} → BN → ReLU → ConvT_512 → BN → ReLU → ConvT_256 → BN → ReLU → ConvT_128 → BN → ReLU → ConvT_1
All convolutions Conv_n and transposed convolutions ConvT_n have a filter size of 4×4 for MNIST and CIFAR-10 and 5×5 for CelebA. They all have a stride of size 2, except for the last convolutional layer in the decoder. Finally, M = 1 for all models except for the VAE, which has M = 2, as its encoder has to produce both a mean and a variance for each input." }, { "heading": "D EVALUATION SETUP", "text": "We compute the FID of the reconstructions of random validation samples against the test set to evaluate reconstruction quality. For evaluating generative modeling capabilities, we compute the FID between the test data and randomly drawn samples from a single Gaussian that is either the isotropic p(z) fixed for VAEs and WAEs, a learned second-stage VAE for 2sVAEs, or a single Gaussian fit to $q_\delta(z)$ for CV-VAEs and RAEs. For all models, we also evaluate random samples from a 10-component Gaussian mixture model (GMM) fit to $q_\delta(z)$. Using only 10 components prevents us from overfitting (which would indeed give good FIDs when compared with the test set).[3]
For interpolations, we report the FID for the furthest interpolation points resulting from applying spherical interpolation to randomly selected validation reconstruction pairs.
We use 10k samples for all FID and PRD evaluations. Scores for random samples are evaluated against the test set. Reconstruction scores are computed from validation set reconstructions against the respective test set. Interpolation scores are computed by interpolating latent codes of a pair of randomly chosen validation embeddings vs test set samples. The visualized interpolation samples are interpolations between two randomly chosen test set images." }, { "heading": "E EVALUATION BY PRECISION AND RECALL", "text": "[3] We note that fitting GMMs with up to 100 components only improved results marginally. Additionally, we provide nearest neighbours from the training set in Appendix G to show that our models are not overfitting." }, { "heading": "F MORE QUALITATIVE RESULTS", "text": "G INVESTIGATING OVERFITTING
H VISUALIZING EX-POST DENSITY ESTIMATION
To visualize that ex-post density estimation does indeed help reduce the mismatch between the aggregated posterior and the prior, we train a VAE on the MNIST dataset whose latent space is 2-dimensional. The unique advantage of this setting is that one can simply visualize the density of test samples in the latent space by plotting them as a scatter plot.
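A rough sketch of this visualization procedure follows, under the assumption of an encoder mapping images to 2-D codes; all names here are illustrative:

```python
# Scatter-plot 2-D latent codes of test samples against the standard
# Gaussian prior p(z), to make any aggregated-posterior mismatch visible.
import matplotlib.pyplot as plt
import numpy as np

def plot_latent_mismatch(encoder, test_images, labels):
    z = encoder(test_images)                 # (N, 2) latent codes
    plt.scatter(z[:, 0], z[:, 1], c=labels, s=4, cmap="tab10")
    # One- and two-sigma circles of the prior p(z) = N(0, I) for reference.
    theta = np.linspace(0, 2 * np.pi, 200)
    for r in (1.0, 2.0):
        plt.plot(r * np.cos(theta), r * np.sin(theta), "k--", lw=0.8)
    plt.title("Aggregated posterior vs. prior (2-D latent space)")
    plt.show()
```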
As can be seen in Figure 11, an expressive density estimator effectively fixes the mismatch and, as reported earlier, this results in better sample quality.
In Figure 12, we perform the same visualization with all the models trained on the MNIST dataset as employed in our large evaluation in Table 1. Clearly, every model exhibits a rather large mismatch between aggregated posterior and prior. Once again, the advantage of ex-post density estimation is clearly visible." }, { "heading": "I COMBINING MULTIPLE REGULARIZATION TERMS", "text": "The rather intriguing fact that the AE without explicit decoder regularization performs reasonably well, as seen in Table 1, indicates that convolutional neural networks, when combined with gradient-based optimizers, inherit some implicit regularization. This motivates us to investigate a few different combinations of regularizations, e.g., regularizing the decoder of an autoencoder while dropping the regularization in the z space. The results of this experiment are reported in the row marked AE-L2 in Table 3.
Furthermore, recent GAN literature reports that a combination of regularizations often boosts the performance of neural networks. Following this, we combine multiple regularization techniques in our framework. Note, however, that this drastically increases the number of hyperparameters, makes the models harder to train, and goes against the core theme of this work, which strives for simplicity. Hence, we made only a modest effort to tune all the hyperparameters to see whether this can boost performance, which does not seem to be the case. These experiments are summarized in the second half of Table 3." } ]
2020
null
SP:7cd001a35175d8565c046093dcf070ba7fa988d6
[ " This paper proposes using the features learned through Contrastive Predictive Coding as a means for reward shaping. Specifically, they propose to cluster the embedding using the clusters to provide feedback to the agent by applying a positive reward when the agent enters the goal cluster. In more complex domains they add another negative distance term of the embedding of the current state and goal state. Finally, they provide empirical evidence of their algorithm working in toy domains (such as four rooms and U-maze) as well as a set of control environments including AntMaze and Pendulum.", "The paper proposes a reward shaping method which aim to tackle sparse reward tasks. The paper first trains a representation using contrastive predictive coding and then uses the learned representation to provide feedback to the control agent. The main difference from the previous work (i.e. CPC) is that the paper uses the learned representation for reward shaping, not for learning on top of these representation. This is an interesting research topic. " ]
While recent progress in deep reinforcement learning has enabled robots to learn complex behaviors, tasks with long horizons and sparse rewards remain an ongoing challenge. In this work, we propose an effective reward shaping method through predictive coding to tackle sparse reward problems. By learning predictive representations offline and using these representations for reward shaping, we gain access to reward signals that understand the structure and dynamics of the environment. In particular, our method achieves better learning by providing reward signals that 1) understand environment dynamics, 2) emphasize features most useful for learning, and 3) resist noise in learned representations through reward accumulation. We demonstrate the usefulness of this approach in different domains ranging from robotic manipulation to navigation, and we show that reward signals produced through predictive coding are as effective for learning as hand-crafted rewards.
[ { "affiliations": [], "name": "SPARSE REWARDS" } ]
[ { "authors": [ "Rishabh Agarwal", "Chen Liang", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "Learning to generalize from sparse and underspecified rewards", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "Devon Hjelm", "Aaron Courville" ], "title": "Mutual information neural estimation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Marc G Bellemare", "Joel Veness", "Michael Bowling" ], "title": "Investigating contingency awareness using atari 2600 games", "venue": "In Twenty-Sixth AAAI Conference on Artificial Intelligence,", "year": 2012 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Hugo Caselles-Dupré", "Michael Garcia-Ortiz", "David Filliat" ], "title": "Continual state representation learning for reinforcement learning using generative replay", "venue": "arXiv preprint arXiv:1810.03880,", "year": 2018 }, { "authors": [ "Maxime Chevalier-Boisvert", "Lucas Willems", "Suman Pal" ], "title": "Minimalistic gridworld environment for openai gym", "venue": "https://github.com/maximecb/gym-minigrid,", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart van Merriënboer", "Dzmitry Bahdanau", "Yoshua Bengio" ], "title": "On the properties of neural machine translation: Encoder–decoder approaches. 
Syntax, Semantics and Structure in Statistical Translation", "venue": null, "year": 2014 }, { "authors": [ "Thomas M Cover", "Joy A Thomas" ], "title": "Elements of information theory", "venue": null, "year": 2012 }, { "authors": [ "Sam Michael Devlin", "Daniel Kudenko" ], "title": "Dynamic potential-based reward shaping", "venue": "In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems,", "year": 2012 }, { "authors": [ "Adrien Ecoffet", "Joost Huizinga", "Joel Lehman", "Kenneth O Stanley", "Jeff Clune" ], "title": "Montezuma's revenge solved by go-explore, a new algorithm for hard-exploration problems (sets records on pitfall", "venue": "URL https://eng.uber.com/go-explore/", "year": 2018 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "Yang Gao", "Francesca Toni" ], "title": "Potential based reward shaping for hierarchical reinforcement learning", "venue": "In Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Dibya Ghosh", "Abhishek Gupta", "Sergey Levine" ], "title": "Learning actionable representations with goal-conditioned policies", "venue": "arXiv preprint arXiv:1811.07819,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Sehoon Ha", "Jie Tan", "George Tucker", "Sergey Levine" ], "title": "Learning to walk via deep reinforcement learning", "venue": "arXiv preprint arXiv:1812.11103,", "year": 2018 }, { "authors": [ "Ashley Hill", "Antonin Raffin", "Maximilian Ernestus", "Adam Gleave", "Rene Traore", "Prafulla Dhariwal", "Christopher Hesse", "Oleg Klimov", "Alex Nichol", "Matthias Plappert", "Alec Radford", "John Schulman", "Szymon Sidor", "Yuhuai Wu" ], "title": "Stable baselines. 
https://github.com/hill-a/stable-baselines, 2018", "venue": null, "year": 2018 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Max Jaderberg", "Wojciech M Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio Garcia Castaneda", "Charles Beattie", "Neil C Rabinowitz", "Ari S Morcos", "Avraham Ruderman" ], "title": "Human-level performance in first-person multiplayer games with population-based deep reinforcement learning", "venue": "arXiv preprint arXiv:1807.01281,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Sameera Lanka", "Tianfu Wu" ], "title": "Archer: Aggressive rewards to counter bias in hindsight experience replay", "venue": "arXiv preprint arXiv:1809.02070,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Near-optimal representation learning for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1810.01257,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Shixiang Shane Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Andrew Y Ng", "Daishi Harada", "Stuart Russell" ], "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "venue": "In ICML,", "year": 1999 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Martin L Puterman" ], "title": "Markov decision processes: Discrete stochastic dynamic programming", "venue": null, "year": 1994 }, { "authors": [ "Martin Riedmiller", "Roland Hafner", "Thomas Lampe", "Michael Neunert", "Jonas Degrave", "Tom Wiele", "Vlad Mnih", "Nicolas Heess", "Jost Tobias Springenberg" ], "title": "Learning by playing – solving sparse reward tasks from scratch", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Tim Salimans", "Richard Chen" ], "title": "Learning montezuma’s revenge from a single demonstration", "venue": "arXiv preprint arXiv:1812.03381,", "year": 2018 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Aaron van 
den Oord", "Oriol Vinyals" ], "title": "Neural discrete representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jingwei Zhang", "Jost Tobias Springenberg", "Joschka Boedecker", "Wolfram Burgard" ], "title": "Deep reinforcement learning with successor features for navigation across similar environments", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2017 }, { "authors": [ "Haosheng Zou", "Tongzheng Ren", "Dong Yan", "Hang Su", "Jun Zhu" ], "title": "Reward shaping via metalearning", "venue": "arXiv preprint arXiv:1901.09330,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent progress in deep reinforcement learning (DRL) has enabled robots to learn and execute complex tasks, ranging from game playing (Jaderberg et al., 2018; OpenAI, 2019), robotic manipulations (Andrychowicz et al., 2017; Haarnoja et al., 2018), to navigation (Zhang et al., 2017). However, in many scenarios learning depends heavily on meaningful and frequent feedback from the environment for the agent to learn and correct behaviors. As a result, reinforcement learning (RL) problems with sparse rewards still remain a difficult challenge (Riedmiller et al., 2018; Agarwal et al., 2019).\nIn a sparse reward setting, the agent typically explores without receiving any reward, until it enters a small subset of the environment space (the ”goal”). Due to lack of frequent feedback from the environment, learning in sparse reward problems is typically hard, and heavily relies on the agent entering the ”goal” during exploration. A possible way to tackle this is through reward shaping (Devlin & Kudenko, 2012; Zou et al., 2019; Gao & Toni, 2015), where manually designed rewards are added to the environment to guide the agent towards finding the ”goal”; however, this approach often requires domain knowledge of the environment, and may bias learning if the shaped rewards are not robust (Ng et al., 1999).\nRL problems often benefit from representation learning (Bengio et al., 2013), which studies the transformation of raw observations of an environment (sensors, images, coordinates etc) into a more meaningful form, such that the agent can more easily extract information useful for learning. Intuitively, raw states contain redundant or irrelevant information about the environment, which the agent must take time to learn to distinguish and remove; representation learning directly tackles this problem by either eliminating redundant dimensions (Kingma & Welling, 2013; van den Oord et al., 2017) or emphasizing more useful elements of the state (Nachum et al., 2018a). Much of the prior work on representation learning focuses on generative approaches to model the environment, but some recent work also studies optimizations that learn important features (Ghosh et al., 2018).\nIn this paper, we tackle the challenge of DRL to solve sparse reward tasks: we apply representation learning to provide the agent meaningful rewards without the need for domain knowledge. In particular, we propose to use predictive coding in an unsupervised fashion to extract features that maximize the mutual information (MI) between consecutive states in a state trajectory. These predictive features are expected to have the potential to simplify the structure of an environment’s state space: they are optimized to both summarize the past and predict the future, capturing the most important\nelements of the environment dynamics. We show this method is useful for model-free learning from either raw states or images, and can be applied on top of any general deep reinforcement learning algorithms such as PPO (Schulman et al., 2017).\nAlthough MI has traditionally been difficult to compute, recent advances have suggested optimizing on a tractable lower bound on the quantity (Hjelm et al., 2018; Belghazi et al., 2018; Oord et al., 2018). 
We adopt one such method, Contrastive Predictive Coding (Oord et al., 2018), to extract features that maximize MI between consecutive states in trajectories collected during exploration (note that our approach is not restricted to a specific predictive coding scheme such as CPC). Such features are then used for simple reward shaping in representation space to provide the agent better feedback in sparse reward problems. We demonstrate the validity of our method through extensive numerical simulations in a wide range of control environments such as maze navigation, robot locomotion, and robotic arm manipulation (Figure 4). In particular, we show that using these predictive features, we provide reward signals as effective for learning as hand-shaped rewards, which encode domain and task knowledge.
This paper is structured as follows: We begin by providing preliminary information in Section 2 and discussing relevant work in Section 3; then, we explain and illustrate the proposed method in Section 4; lastly, we present experiment results in Section 5, and conclude the paper by discussing the results and pointing out future work in Section 6." }, { "heading": "2 PRELIMINARIES", "text": "Reinforcement Learning and Reward Shaping: This paper assumes a finite-horizon Markov Decision Process (MDP) (Puterman, 1994), defined by a tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma, T)$. Here, $\mathcal{S} \subseteq \mathbb{R}^d$ denotes the state space, $\mathcal{A} \subseteq \mathbb{R}^m$ denotes the action space, $\mathcal{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}_+$ denotes the state transition distribution, $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ denotes the reward function, $\gamma \in [0, 1]$ is the discount factor, and finally $T$ is the horizon. At each step $t$, the action $a_t \in \mathcal{A}$ is sampled from a policy distribution $\pi_\theta(a_t | s_t)$, where $s \in \mathcal{S}$ and $\theta$ is the policy parameter. After transiting into the next state by sampling from $p(s_{t+1} | a_t, s_t)$, where $p \in \mathcal{P}$, the agent receives a scalar reward $r(s_t, a_t)$. The agent continues performing actions until it enters a terminal state or $t$ reaches the horizon, at which point the agent has completed one episode. We let $\tau$ denote the sequence of states that the agent enters in one episode.
With this definition, the goal of RL is to learn a policy $\pi_{\theta^*}(a_t | s_t)$ that maximizes the expected discounted reward $\mathbb{E}_{\pi,\mathcal{P}}[R(\tau_{0:T-1})] = \mathbb{E}_{\pi,\mathcal{P}}\left[\sum_{t=0}^{T-1} \gamma^t r(s_t, a_t)\right]$, where the expectation is taken over the possible trajectories $\tau$ and starting states $s_0$. In this paper, we assume model-free learning, meaning the agent does not have access to $\mathcal{P}$. Reward shaping essentially replaces the original MDP with a new one, whose reward function is now $r'(s_t, a_t)$. In this paper, reward shaping is done to train a policy $\pi_{r'}$ that maximizes the expected discounted reward in the original MDP, i.e., $\mathbb{E}_{\pi_{r'},\mathcal{P}}[R(\tau_{0:T-1})]$.
Mutual Information and Predictive Coding: Mutual information measures the amount of information obtained about one random variable after observing another random variable (Cover & Thomas, 2012). Formally, given two random variables $X$ and $Y$ with joint distribution $p(x, y)$ and marginal densities $p(x)$ and $p(y)$, their MI is defined as the KL-divergence between the joint density and the product of the marginal densities:
$MI(X; Y) = D_{KL}\big(p(x, y)\,\|\,p(x)p(y)\big) = \mathbb{E}_{p(x,y)}\left[\log \frac{p(x, y)}{p(x)p(y)}\right].$ (1)
Predictive coding in this paper aims to maximize the MI between consecutive states in the same state trajectory. 
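As a toy illustration of Eq. (1), consider a small discrete joint distribution where the MI can be computed exactly; the probability table below is our own, purely for illustration:

```python
# A tiny numerical illustration of Eq. (1): MI for a 2x2 joint distribution.
import numpy as np

p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])              # joint p(x, y)
p_x = p_xy.sum(axis=1, keepdims=True)      # marginal p(x)
p_y = p_xy.sum(axis=0, keepdims=True)      # marginal p(y)

# MI(X; Y) = E_{p(x,y)}[ log p(x,y) / (p(x) p(y)) ]
mi = float(np.sum(p_xy * np.log(p_xy / (p_x * p_y))))
print(f"MI = {mi:.3f} nats")               # ~0.193: X and Y share information
```

For the continuous, high-dimensional states encountered in RL, no such exact computation is possible, which is what motivates the lower bound introduced next.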
As MI is difficult to compute, we adopt the method of optimizing a lower bound, InfoNCE (Oord et al., 2018), which takes the current context $c_t$ to predict a future state $s_{t+k}$:
$MI(s_{t+k}; c_t) \ge \mathbb{E}_S\left[\log \frac{f(z_{t+k}, c_t)}{f(z_{t+k}, c_t) + \sum_{s_j \in S} f(z_j, c_t)}\right]$ (2)
Here, $f(x, y)$ is optimized through cross entropy to model a density ratio: $f(x, y) \propto \frac{p(x\,|\,y)}{p(x)}$. $z_{t+k}$ is the embedding of state $x_{t+k}$ by the encoder, and $c_t$ is obtained by summarizing the embeddings of the previous $n$ states in a segment of a trajectory, $z_{t-n+1:t}$, through a gated recurrent unit (Cho et al., 2014).
Intuitively, the context $c_t$ pays attention to the evolution of states in order to summarize the past and predict the future; thus, it forces the encoder to extract only the essential dynamical elements of the environment, elements that encapsulate state evolution.
3 RELEVANT WORKS
Our paper uses the method of Contrastive Predictive Coding (CPC) (Oord et al., 2018), which includes experiments in the domain of RL. In the CPC paper, the InfoNCE loss is applied to the LSTM component (Hochreiter & Schmidhuber, 1997) of an A2C architecture (Mnih et al., 2016; Espeholt et al., 2018). The LSTM maps every state observation to an embedding, which is then directly used for learning. This differs from our approach, where we train on pre-collected trajectories to obtain embeddings, and only use these embeddings to provide rewards to the agent, which still learns on the raw states. Our approach has two main advantages: 1) preprocessing states allows us to collect exploration-focused trajectories and obtain embeddings that are suitable for multi-tasking; 2) using embeddings to provide rewards is more resistant to noise in the embeddings than using them as training features, since in the former case we care more about the accumulation of rewards across multiple states, where the noise is diluted.
Applying representation learning to RL has been studied in many prior works (Nachum et al., 2018a; Ghosh et al., 2018; Oord et al., 2018; Caselles-Dupré et al., 2018). In a recent paper on actionable representations (Ghosh et al., 2018), representation learning is also applied to providing the agent useful reward signals. In that paper, states are treated as goals, and embeddings are optimized in a way that the distance between two states reflects the difference between the policies required to reach them. This is fundamentally different from our approach, which aims to extract features that have predictive qualities. Furthermore, computing actionable representations requires trained goal-conditioned policies as part of the optimization, which is a strict requirement, while this paper aims to produce useful representations without needing access to trained policies.
Lastly, VQ-VAE (van den Oord et al., 2017) is a generative approach that provides a principled way of extracting low-dimensional features. In contrast to the VAE (Kingma & Welling, 2013), it outputs a discrete codebook, and the prior distribution is learned rather than static. VQ-VAE could be useful for removing redundant information from raw states, which may speed up learning; however, since the goal of VQ-VAE is reconstruction, it does not put emphasis on features that are particularly useful for learning, nor does it attempt to understand the environment dynamics across long segments of states. 
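Returning to the training objective, the bound of Eq. (2) reduces in practice to a standard cross-entropy over one positive and several negative candidates. The sketch below is our own minimal rendering, assuming an encoder, a GRU summarizer (torch.nn.GRU with batch_first=True), and the bilinear score log f(z, c) = zᵀW_k c described around Eq. (2); names and shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def info_nce_step(encoder, gru, W_k, segment, k):
    """segment: (n + k, state_dim) consecutive states from one trajectory."""
    z = encoder(segment)                     # (n + k, emb_dim) embeddings
    n = segment.shape[0] - k
    _, h = gru(z[:n].unsqueeze(0))           # summarize z_0 .. z_{n-1}
    c_t = h[-1, 0]                           # context vector (emb_dim,)

    # One score per candidate embedding: log f(z_j, c_t) = z_j^T W_k c_t.
    # The true future z_{t+k} is the positive; other states act as negatives.
    logits = z @ (W_k @ c_t)                 # (n + k,)
    target = torch.tensor([n + k - 1])       # index of the true z_{t+k}
    return F.cross_entropy(logits.unsqueeze(0), target)
```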
Compared to such reconstruction-based approaches, our use of predictive coding is a better fit for reinforcement learning, as we emphasize features that help understand the evolution of states rather than reconstruct each individual state." }, { "heading": "4 METHOD", "text": "" }, { "heading": "4.1 LEARNING PREDICTIVE FEATURES", "text": "In this step, the key idea is to train, in an unsupervised fashion and prior to learning, an encoder that extracts predictive features from states. We begin by collecting state trajectories through initial exploration. While there is no requirement for any specific exploration strategy, we used random exploration with manual resets to collect diverse trajectories without the need for pre-trained policies.
From these trajectories, segments of consecutive states are sampled and used to train a CPC encoder: for each segment, a fixed number of beginning states ($x_{t-n:t}$) are encoded into latent embeddings ($z_{t-n:t}$) and summarized by a GRU ($c_t = f_{GRU}(z_{t-n:t})$); the output of the GRU, referred to in the original paper as the “context”, is then used to predict the embedding of each remaining state ($z_{t+k}$) in the segment through a score function $s_{t+k} = \exp(z_{t+k}^{\top} W_k c_t)$. More architectural details can be found in Appendix A." }, { "heading": "4.2 APPLYING PREDICTIVE FEATURES TO RL", "text": "The trained embeddings are then used for reward shaping in two ways:
Clustering: We sample random states from the environment and cluster their corresponding embeddings. We found that clustering these embeddings provides meaningful information on the global structure of the environments, i.e., it groups states that are naturally close to each other (Figure 1). We then use these clusters to provide additional reward signals to the agent, in particular awarding the agent a positive reward for entering the cluster that contains the goal. This way, the agent is more likely to enter the subset of the state space that is close to the goal (Figure 2), and learning is faster and more stable.
Distance in representation space: In more complex domains, we additionally penalize the distance between the embeddings of the current state and the goal state; since the learned representations capture the environment dynamics, such distances remain meaningful even in non-linear structures such as mazes (Figure 3).
In the next section, we study both the embeddings obtained from initial training as well as both of the applications discussed above." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we will address the following questions:
1. Does predictive coding simplify environment structure?
2. Do these simplified representations provide reward feedback to the agent in sparse reward tasks?
We show the result of applying predictive coding to learning in five different environments: GridWorld, HalfCheetah, Pendulum, Reacher, and AntMaze (Figure 4). These environments form a rich set of standard DRL experiments, covering both discrete and continuous action spaces.
The GridWorld environment (Chevalier-Boisvert et al., 2018) is used as the primary experiment for discrete settings. Although it has simple dynamics, GridWorld environments have a variety of maze structures that pose an interesting representation learning problem: two points separated by a wall are close distance-wise, but they might require the agent to take a dramatically long route to reach one point from the other. In our experiments, we demonstrate that predictive coding is able to understand the global structure of any arbitrary maze, and map states in the latent space according to their actual distances in the maze.
We use Mujoco (Todorov et al., 2012) and classic Gym (Brockman et al., 2016) environments for continuous control settings. HalfCheetah, Pendulum, and Reacher are environments in the continuous setting with richer dynamics than GridWorld. 
While they have simpler global structures (e.g., the pendulum moves on a circle), we show that predictive coding is able to understand a hierarchy of features in these environments, and that these features can be directly incorporated to speed up learning. AntMaze environments (Nachum et al., 2018b) have continuous control dynamics as well as interesting maze structures similar to GridWorld. As a result, representations learned by predictive coding in AntMaze show both an understanding of global structure and the formation of hierarchies of features.
To show the generality of our approach, we use an open-sourced implementation of CPC and standard DRL baselines for training. A compilation of all code used in this paper will be made publicly available. Further details on the experimental setup and architecture can be found in Appendix A." }, { "heading": "5.1 DISCRETE SETTINGS", "text": "" }, { "heading": "5.1.1 GRIDWORLD", "text": "GridWorld environments are 2D 17-by-17 square mazes with different layouts. We include 3 different layouts: one with a U-shaped barrier (U-Maze), one with 4 rooms divided by walls (Four-Rooms), and one with 4 square blocks (Block-Maze).
The result of applying CPC representation learning to GridWorld is shown in Figure 5. In all three experiments, CPC learns representations that reflect the true distance between two points. For instance, as seen in the plot, states on opposite sides of the barrier (blue and green states) in U-Maze are mapped to points far from each other in the representation space, despite being close distance-wise. Similarly, the states in the blue room and green room of Four-Rooms are mapped to the two ends of a long band in the representation space, with states in the red and pink rooms located in the middle. This reflects the need for the agent to go through the red and pink rooms to travel between the blue room and the green room. Lastly, the representation learned in Block-Maze restores the true structure of the maze from image-based observations.
We assess the quality of the embeddings by analysing how much they reflect the true distances between states. For each maze environment, we sample random pairs of points from the maze, and plot the true distance (obtained by running A* in the original maze) between the pair against their distance in the representation space (L2 norm). Additionally, we run linear regression to obtain a line of best fit for each plot. The result for U-Maze is shown in Figure 6, and for all three mazes we observe a strong correlation between the true distances and the distances in latent space.
We show the result of applying clustering to sparse reward problems in GridWorld: the agent is randomly spawned and navigates in the maze, and it only receives a positive reward when reaching the goal. Without additional reward signals, the agent might not be able to reach the goal if it is spawned in a far-away location. To make use of the clusters obtained from CPC representations, we train a two-step policy: the agent first goes to the cluster that contains the goal, and then to the goal. We reward the agent for reaching the cluster in the first step, and use the environment reward for the second step. This way, the agent receives more signal in all locations in the maze. An illustration of a policy trained this way is shown in Figure 2.
We find that this approach leads to better learning in all three mazes (Figure 7). In all three experiments, the reward (adjusted to remove the cluster reward) converges faster and to higher values with clustering. 
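A rough sketch of the clustering-based shaping behind these results follows; the paper does not specify the clustering algorithm, cluster count, or bonus magnitude, so the k-means step and the values below are our own illustrative choices (classic Gym step signature assumed):

```python
# Reward wrapper: a positive bonus whenever the agent's state falls into
# the embedding cluster that contains the goal (Section 4.2, Clustering).
from sklearn.cluster import KMeans

class ClusterRewardWrapper:
    def __init__(self, env, encoder, sampled_states, goal_state, bonus=0.5):
        self.env, self.encoder, self.bonus = env, encoder, bonus
        self.kmeans = KMeans(n_clusters=8).fit(encoder(sampled_states))
        self.goal_cluster = self.kmeans.predict(encoder(goal_state[None]))[0]

    def step(self, action):
        state, reward, done, info = self.env.step(action)
        cluster = self.kmeans.predict(self.encoder(state[None]))[0]
        if cluster == self.goal_cluster:
            reward += self.bonus           # entered the goal's cluster
        return state, reward, done, info
```

In the two-step GridWorld policy described above, such a bonus would apply only during the first stage, with the environment reward taking over once the goal cluster is reached.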
The improvement is likely because the additional reward for entering the cluster guides the agent towards states that are naturally close to the goal, allowing the agent to reach the goal more frequently during exploration. Table 1 shows that the policy learned through clustering significantly outperforms the policy learned in the standard setting." }, { "heading": "5.2 CONTINUOUS SETTINGS", "text": "We study four different control environments: Pendulum, AntMaze, Reacher, and HalfCheetah (the latter two are discussed in Appendix A).
The environments used in this section have much richer dynamics than GridWorld. We show that the learned representations simplify these environments both by understanding the global structure of the environment (AntMaze) and by encoding meaningful hierarchies of features.
Unlike in GridWorld, simple clustering strategies are less effective because of the large state space. Instead, we directly optimize on the agent’s distance to the goal in representation space. We show that this simple approach can lead to improvements in learning as large as those obtained with hand-shaped rewards. For each environment, we include 4 setups, each using 3 random seeds:
1. Sparse reward (blue): providing the agent a small positive reward when it reaches the goal.
2. Hand-shaped reward (pink): providing the agent a hand-shaped reward at each step.
3. Raw distance (green): providing the agent a negative penalty on the distance between the current state and the goal state (plus a sparse reward at the goal for AntMaze only).
4. Embedding distance (orange): providing the agent a negative penalty on the distance between the current state and the goal state in the representation space (plus a control penalty on the action norm for HalfCheetah only).
The details of the hand-shaped reward schemes can be found in Appendix A.
5.2.1 PENDULUM
The Pendulum is a classic control problem where a rigid arm freely swings about a fixed center. To imitate the swing-up task, we set the goal states to have angles θ ∈ [−0.1, 0.1], where an angle of 0 means the arm is pointing straight up. Additionally, we consider “the goal is reached” only after the agent manages to stay among goal states for 5 consecutive steps. For the hand-shaped reward, we penalize the magnitude of the angle to encourage the arm to maintain an upward position.
As shown in Figure 8, clustering the embeddings produces clusters primarily by angle. However, there is still a lot of overlap between clusters when we only consider the position of the arm, suggesting that the arm’s velocity also plays a role, albeit a less important one. This hierarchy between position and velocity was very beneficial for learning, as the agent would learn to swing up the arm first before decelerating the arm to maintain top positions. Indeed, optimizing on distance in embedding space (orange) led to much faster learning than all other setups, including the hand-shaped rewards (pink), whereas optimizing on distance in the original space (green) leads to sub-optimal behaviors such as reducing velocity too early." }, { "heading": "5.2.2 ANTMAZE", "text": "Finally, in AntMaze, a four-legged robot navigates in a maze-like environment until it reaches the goal area. Naturally, AntMaze has both the rich dynamics of a robot as well as the structure of a maze environment, and is helpful for illustrating the power of predictive coding to both reflect the global structure of an environment and pick out the most important features. In our experiment setup, we use a thin wall to block passage in the maze, so that a state on the other side of the wall 
In our experiment setup, we use a thin wall to block passage in the maze, so that a state on the other side of the wall\nmay appear close to the agent, but is in reality very far. We set the goal state to be the lower left corner of the maze; for hand-shaped rewards, we assign each state in the maze a correct direction to move in and award the agent for moving in that particular direction.\nhand-shaped reward (pink)." }, { "heading": "6 DISCUSSION", "text": "" }, { "heading": "6.1 EMBEDDINGS AS FEATURES VS REWARD SHAPING", "text": "Mutual information maximization is notoriously difficult to optimize, and may easily produce noisy embeddings without sufficient training data. Our approach mitigates this problem in two aspects. Firstly, we preprocess the embeddings instead of training them online, so that the agent avoids learning on noisy embeddings that are not fully trained. Secondly, instead of using the embeddings as features to train on, we use them to provide reward signals to the agent, who still learns using the raw features. This approach is more resilient to noises in embeddings, especially for policy gradient methods, since we care more about total rewards across trajectories than the rewards of individual states.\nWe illustrate the above points by comparing training with cpc features and our approach in the Reacher environment. Both experiments use the same architectural and algorithmic settings, and the raw states used for training the embeddings contain information about the goal. As shown in Figure 10, the use of cpc embed-\ndings as features lead to insignificant improvements to learning, where as using these embeddings to only provide reward signals led to the best performance." }, { "heading": "6.2 TEXTURE AGNOSTIC PREDICTIVE CODING", "text": "In this section, we discuss an important advantage of predictive coding: since embeddings are optimized to maximize their predictive abilities, less meaningful information such as the texture of the\nbackground from the raw observations are ignored. This property of predictive coding differentiates itself from other unsupervised learning methods such as autoencoder or VAE, which inevitably pay attention to the background in order to reconstruct the original states.\nThis property of predictive coding makes it possible for an agent to learn in a constantly changing environment, such as a game (Bellemare et al., 2012). We showcase this property by training the encoder with states from the Pendulum environment with multiple backgrounds (bricks, sand, cloth), and assess the encoder’s generalizability to new textures (such as wood). Figure 11 contains examples of textures used for training and validation, as well as the clustering results of their corresponding embeddings. In particular, two textures resulted in very similar embeddings, even though the encoder had never seen the wooden texture during training. We conclude that predictive coding has learned to ignore the background, which contains less important information about the state dynamics." }, { "heading": "6.3 EXPLORATION", "text": "Our proposed method relies on the quality of trajectories collected at the beginning, which in turn depends on the initial exploration. Although in most cases, exploration with random policies or simple goal-conditioned policies is enough to produce trajectories that expose the environment dynamics, there are environments with extremely long horizons or large state spaces that effective exploration without learning the task is difficult. 
An example is Montezuma’s Revenge, which is currently unsolvable without algorithms designed to tackle hard exploration (Ecoffet et al., 2018) or expert demonstration data (Salimans & Chen, 2018). For future work, a direction is to train the embeddings online, i.e. during training the agent. This way, trajectories collected may be more relevant to the particular training task, and we could obtain high-quality embeddings (high-quality in the sense that they are useful for the particular training task) without thorough exploration of the environment. As discussed in the first paragraph, learning on intermediate embeddings may be undesirable, so the agent should initially rely purely on environment rewards, and only start receiving rewards shaped by embeddings after the embeddings reach a certain quality mark (for CPC, this could be checked by the InfoNCE loss, which indicates a lower bound on mutual information)." }, { "heading": "6.4 NEGATIVE DISTANCE AS A POTENTIAL FUNCTION", "text": "One of the major issues with reward shaping is that it could potentially bias learning, leading the agent to learn a suboptimal policy. In a previous work on policy invariance under reward transformation (Ng et al., 1999), the notion of potential-based reward shaping function establishes the conditions for guaranteeing unbiased learning. With the use of predictive coding, our negative distance reward scheme is a potential function if the latent space induced by the encoder preserves the metric properties of the original state space. A rigorous formulation of this setting necessitates a mathematical analysis, which is out of the scope of this preliminary study." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ENVIRONMENT DESCRIPTIONS", "text": "GridWorld: The agent is a point that can move horizontally or vertically in a 2-D maze structure. Each state observation is a compact encoding of the maze, with each layer containing information about the placement of the walls, the goal position, and the agent position respectively. The goal state is one in which the goal position and the agent position are the same.\nHalfCheetah: The agent is a 2-d two-legged robot freely moving on a horizontal plane. Each state observation encodes the position of the agent, the angles and the velocities of each joint. The goal states are any states where the agent position x ≥ 10. Pendulum: The agent is a rigid arm moving about a fixed center by applying a force to its tip. Each state observation encodes the angle and the velocity of the arm. The goal states are any states where the agent achieves an angle θ ∈ [−0.1, 0.1]. Reacher: The agent is a robotic arm with two rigid sections connected by a joint. The agent moves about a fixed center on the plane by applying a force to each rigid section. Each state observation encodes the angles of two sections, the position of the tip of the arm, and the direction to goal. The goal state is one where the tip of the arm touches a certain point on the plane.\nAntMaze: The agent is a four-legged robot freely moving in a maze structure. Each state observation encodes the position of the agent, the angles and the velocities of each joint. Instead of learning from scratch, we pre-train a simple direction-conditioned walking policy and learn to navigate in this environment. The goal states are a square area with side 2 and center (0, 0)." 
}, { "heading": "Environment State/Goal Dimensions Action Dimensions Maximum Steps", "text": "" }, { "heading": "A.2 NETWORK PARAMETERS AND HYPERPARAMETERS FOR LEARNING", "text": "For all experiments in this paper we use a standard PPO baseline (Hill et al., 2018) to train. We use two fully-connected layers with output size 64 for the actor critic. We provide hyperparameters below, and refer to Hill et al. (2018) for all other implementation details." }, { "heading": "A.3 TRAINING CPC", "text": "We follow the approach proposed in Oord et al. (2018) to obtain predictive features from states. The details are provided in the two tables below. All hyperparameters for training CPC are tuned through performing grid search." }, { "heading": "Environment Encoder Type Autoregressive Model Type", "text": "" }, { "heading": "A.4 HAND-SHAPED REWARD SCHEMES", "text": "For continuous environments, we show that using predictive features provides reward signals as informative as hand-shaped rewards, which encodes domain and task knowledge about the environment. In particular, each hand-shaped reward scheme contains information about where the goal is and how to get there. We provide details below for each environment.\n• HalfCheetah: (xt+1 − xt)− α ‖at‖, where xt is the x position of the agent at time t, and at is the action input to the agent at time t.\n• Pendulum: −(|θt|2+α |ωt|+β ‖at‖), where θt, ωt, and at are the angle, angular velocity, and action input at time t respectively.\n• Reacher: −(‖pt − pg‖ + α ‖at‖), where pt is the position of the tip of the arm at time t, pg is the position of the goal, and at is the action input at time t.\n• AntMaze: −((xt+1− xt)cos(θt)+ (yt+1− yt)sin(θt)), where xt, yt are the x, y positions of the agent at time t, and θt is the hard-coded direction to travel in." }, { "heading": "A.5 EXPERIMENT RESULT FOR HALFCHEETAH", "text": "Figure 12: Illustrations of embeddings of HalfCheetah and learning curves for different reward schemes. Embeddings (bottom-left, visualized by T-SNE) cluster by x position of the agent (top-left, with x-aixs being x position and y-axis being y-position of the agent).\nFigure 12 shows the visualization of embeddings of random states as well as the comparison between different reward setups. The predictive features focus on the most significant element of the environment: the x position of the agent, allowing us to recover horizontally spaced clusters.\nAs the plot shows, optimizing on the negative distance in embedding space (orange), optimizing on the negative distance in raw space (green), and optimizing hand-shaped rewards (pink) all lead to similar performances. While this is less convincing than other environments, we observe that optimizing negative distance in representation space is not\nworse; rather, it is likely that optimizing on negative distance in raw space is already good enough, since the x-position of the agent has the largest variance among all other features." }, { "heading": "A.6 EXPERIMENT RESULT FOR REACHER", "text": "In the Reacher environment, a robotic arm has two sections with a joint in the middle. The arm’s one end is fixed at the center of the plane, and its goal is typically to reach a certain point on the plane with the other end of the arm. This can be naturally formulated as a sparse reward problem (Lanka & Wu, 2018), where the agent receives no reward until it reaches the goal state. 
For hand-shaped reward, we penalize the distance between the tip of the arm and the goal point plus the L-2 norm of agent’s action for stability.\nSimilar to Pendulum, embeddings of Reacher achieves clusters primarily by the position of the arm, as shown\nin Figure 13 (note that each axis is the angle of one section of the arm). Consequently, optimizing on the distance in embedding space (orange) allows the agent to quickly learn to move directly towards the goal. This turned out to be more stable than using hand-shaped reward (pink), which sometimes led the agent to occasionally overshoot and miss the goal." } ]
2019
null
SP:1e4d48aca131f5ff12775ba51dd1176397038d59
[ "This paper studies the problem of exploration in reinforcement learning. The key idea is to learn a goal-conditioned agent and do exploration by selecting goals at the frontier of previously visited states. This frontier is estimated using an extension of prior work (Pong 2019). The method is evaluated on two continuous control environments (2D navigation, manipulation), where it seems to outperform baselines.", "This paper proposes a new exploration algorithm by proposing a new way of generating intrinsic rewards. Specifically, the authors propose to maintain a \"novelty frontier\" which consists of states that have low-likelihood under some likelihood model trained on their replay buffer. The authors propose to sample from the novelty frontier using a scheme similar to a prior method called Skew-Fit, but replace the VAE with a kernel-based density model. To construct an exploration reward, the authors estimate the KL divergence between the resulting policy state distribution and the desired `state distribution, where the desire state distribution is a Gaussian centered around a point sampled from the novelty frontier." ]
In many reinforcement learning settings, rewards which are extrinsically available to the learning agent are too sparse to train a suitable policy. Besides reward shaping, which requires human expertise, utilizing better exploration strategies helps to circumvent the problem of policy training with sparse rewards. In this work, we introduce an exploration approach based on maximizing the entropy of the visited states while learning a goal-conditioned policy. The main contribution of this work is to introduce a novel reward function which, combined with a goal proposing scheme, increases the entropy of the visited states faster compared to prior work. This improves the exploration capability of the agent and therefore enhances the agent’s chance of solving sparse reward problems more efficiently. Our empirical studies demonstrate the superiority of the proposed method in solving different sparse reward problems in comparison to prior work.
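Piecing together the abstract with the reviews above, the proposed reward signal can only be sketched roughly; every concrete choice below (the kernel density model, the candidate pool size, the Gaussian width, the log-ratio form of the reward) is our own assumption based on the reviewers' description, not the authors' implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde, multivariate_normal

def sample_frontier_goal(buffer_states, alpha=-1.0, n_candidates=256):
    # Skewed sampling a la Skew-Fit: weight visited states by density^alpha
    # (alpha < 0), so rare, frontier-like states are proposed as goals.
    density = gaussian_kde(buffer_states.T)
    cand = buffer_states[np.random.choice(len(buffer_states), n_candidates)]
    w = density(cand.T) ** alpha
    return cand[np.random.choice(n_candidates, p=w / w.sum())], density

def exploration_reward(state, goal, density, sigma=0.5):
    # Reward matching the desired Gaussian around the frontier goal while
    # discouraging already well-visited regions of the state space.
    return multivariate_normal.logpdf(state, mean=goal, cov=sigma**2) \
           - np.log(density(state[None].T)[0])
```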
[]
[ { "authors": [ "Joshua Achiam", "Shankar Sastry" ], "title": "Surprise-based intrinsic motivation for deep reinforcement learning", "venue": "arXiv preprint arXiv:1703.01732,", "year": 2017 }, { "authors": [ "Joshua Achiam", "Harrison Edwards", "Dario Amodei", "Pieter Abbeel" ], "title": "Variational option discovery algorithms", "venue": "arXiv preprint arXiv:1807.10299,", "year": 2018 }, { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "Yevgen Chebotar", "Mrinal Kalakrishnan", "Ali Yahya", "Adrian Li", "Stefan Schaal", "Sergey Levine" ], "title": "Path integral guided policy search", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2017 }, { "authors": [ "Cédric Colas", "Olivier Sigaud", "Pierre-Yves Oudeyer" ], "title": "Curious: Intrinsically motivated multi-task, multi-goal reinforcement learning", "venue": "arXiv preprint arXiv:1810.06284,", "year": 2018 }, { "authors": [ "Adrien Ecoffet", "Joost Huizinga", "Joel Lehman", "Kenneth O Stanley", "Jeff Clune" ], "title": "Go-explore: a new approach for hard-exploration problems", "venue": null, "year": 1901 }, { "authors": [ "Kai Olav Ellefsen", "Jean-Baptiste Mouret", "Jeff Clune" ], "title": "Neural modularity helps organisms evolve to learn new skills without forgetting old skills", "venue": "PLoS computational biology,", "year": 2015 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "arXiv preprint arXiv:1802.06070,", "year": 2018 }, { "authors": [ "Carlos Florensa", "Yan Duan", "Pieter Abbeel" ], "title": "Stochastic neural networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1704.03012,", "year": 2017 }, { "authors": [ "Carlos Florensa", "David Held", "Xinyang Geng", "Pieter Abbeel" ], "title": "Automatic goal generation for reinforcement learning agents", "venue": "arXiv preprint arXiv:1705.06366,", "year": 2017 }, { "authors": [ "Karol Gregor", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "Variational intrinsic control", "venue": "arXiv preprint arXiv:1611.07507,", "year": 2016 }, { "authors": [ "Charles Miller Grinstead", "James Laurie Snell" ], "title": "Introduction to probability", "venue": "American Mathematical Soc.,", "year": 2012 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Thomas C Hales" ], "title": "A proof of the kepler conjecture", "venue": "Annals of mathematics,", "year": 2005 }, { "authors": [ "David Held", "Xinyang Geng", "Carlos Florensa", "Pieter Abbeel" ], "title": "Automatic goal generation for reinforcement learning agents", "venue": null, "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ 
"Leslie Pack Kaelbling" ], "title": "Learning to achieve goals", "venue": "In IJCAI,", "year": 1993 }, { "authors": [ "Mrinal Kalakrishnan", "Ludovic Righetti", "Peter Pastor", "Stefan Schaal" ], "title": "Learning force control policies for compliant manipulation", "venue": "In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2011 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Georg Ostrovski", "Marc G Bellemare", "Aäron van den Oord", "Rémi Munos" ], "title": "Count-based exploration with neural density models", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Pierre-Yves Oudeyer", "Frdric Kaplan", "Verena V Hafner" ], "title": "Intrinsic motivation systems for autonomous mental development", "venue": "IEEE transactions on evolutionary computation,", "year": 2007 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Vitchyr H Pong", "Murtaza Dalal", "Steven Lin", "Ashvin Nair", "Shikhar Bahl", "Sergey Levine" ], "title": "Skewfit: State-covering self-supervised reinforcement learning", "venue": null, "year": 1903 }, { "authors": [ "Nikolay Savinov", "Alexey Dosovitskiy", "Vladlen Koltun" ], "title": "Semi-parametric topological memory for navigation", "venue": "arXiv preprint arXiv:1803.00653,", "year": 2018 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "A possibility for implementing curiosity and boredom in model-building neural controllers", "venue": "In Proc. 
of the international conference on simulation of adaptive behavior: From animals to animats,", "year": 1991 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Bradly C Stadie", "Sergey Levine", "Pieter Abbeel" ], "title": "Incentivizing exploration in reinforcement learning with deep predictive models", "venue": "arXiv preprint arXiv:1507.00814,", "year": 2015 }, { "authors": [ "Sainbayar Sukhbaatar", "Zeming Lin", "Ilya Kostrikov", "Gabriel Synnaeve", "Arthur Szlam", "Rob Fergus" ], "title": "Intrinsic motivation and automatic curricula via asymmetric self-play", "venue": "arXiv preprint arXiv:1703.05407,", "year": 2017 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Vivek Veeriah", "Junhyuk Oh", "Satinder Singh" ], "title": "Many-goals reinforcement learning", "venue": "arXiv preprint arXiv:1806.09605,", "year": 2018 }, { "authors": [ "David Warde-Farley", "Tom Van de Wiele", "Tejas Kulkarni", "Catalin Ionescu", "Steven Hansen", "Volodymyr Mnih" ], "title": "Unsupervised control through non-parametric discriminative rewards", "venue": "arXiv preprint arXiv:1811.11359,", "year": 2018 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nReinforcement Learning (RL) is based on performing exploratory actions in a trial-and-error manner and reinforcing those actions that result in superior reward outcomes. Exploration plays an important role in solving a given sequential decision-making problem. A RL agent cannot improve its behaviour without receiving rewards exceeding the expectation of the agent, and this happens only as the consequence of properly exploring the environment.\nIn this paper, we propose a method to train a policy which efficiently explores a continuous state space. Our method is particularly well-suited to solve sequential decisionmaking tasks with sparse terminal rewards, i.e., rewards received at the end of a successful interaction with the environment. We propose to directly maximize the entropy of the history states by exploiting the mutual information between the history states and a number of reference states. To achieve this, we introduce a novel reward function which, given the references, shapes the distribution of the history states. This reward function, combined with goal proposing learning frameworks, maximizes the entropy of the history states. We demonstrate that this way of directly maximizing the state entropy, compared to indirectly maximizing the mutual information (WardeFarley et al., 2018; Pong et al., 2019) improves the ex-\nploration of the state space as well as the convergence speed at solving tasks with sparse terminal rewards.\nMaximizing the mutual information between the visited states and the goal states, I(S;G), results in a natural exploration of the environment while learning to reach to different goal states (Warde-Farley et al., 2018; Pong et al., 2019). The mutual information can be written as I(S;G) = h(G)− h(G|S); therefore maximizing the mutual information is equivalent to maximizing the en-\ntropy of the goal state while reducing the conditional entropy (conditioned on the goal state). The first term, encourages the agent to choose its own goal states as diverse as possible, therefore improving the exploration, and the second term forces the agent to reach the different goals it has specified for itself, i.e., training a goal-conditioned policy, π(.|s, g). Instead of maximizing the mutual information, we propose to maximize the entropy of the visited states directly, i.e., maximizing h(S) = h(Z) +h(S|Z)−h(Z|S), where Z is a random variable that represents the reference points of promising areas for exploration. Therefore, in our formulation, we have an extra term, h(S|Z), which encourages maximizing the entropy of the state conditioned on the reference points. This extra term, implemented by the proposed reward function, helps the agent to explore better at the vicinity of the references. We call our method Skew-Explore, since similar to Skew-Fit introduced by Pong et al. (2019), it skews the distribution of the references toward the less visited states, but instead of directly reaching the goals, it explores the surrounding areas of them.\nWe experimentally demonstrate that the new reward function enables an agent to explore the state space more efficiently in terms of covering larger areas in less time compared to the earlier methods. Furthermore, we demonstrate that our RL agent is capable of solving long-term sequential decision-making problems with sparse rewards faster. 
We apply the method to three simulated tasks, including a problem to find a trajectory of a YuMi robot end-effector, to open a door of a box, pressing a button inside the box and closing the door. In this case, the sparse reward is given only when the button is pressed and the door is closed, i.e., at the end of about one minute of continuous interaction with the environment. To validate appropriateness of the trajectory found in simulation, we deployed it on a real YuMi robot, as shown in Figure 1. The main contributions of this paper can be summarized as (1) introducing a novel reward function which increases the entropy of the history states much faster compared to the prior work, and (2) experimentally demonstrating the superiority of the proposed algorithm to solve three different sparse reward sequential decision-making problems." }, { "heading": "2 RELATED WORK", "text": "Prior works have studied different algorithms for addressing the exploration problem. In this section, we summarize related works in the domain where rewards from the environment are sparse or absent.\nIntrinsic Reward: One way to encourage exploration is to define an intrinsically-motivated reward, including methods that assimilate the definition of curiosity in psychology (Oudeyer et al., 2007; Pathak et al., 2017). These methods have found success in domains like video games (Ostrovski et al., 2017; Burda et al., 2018). In these approaches, the ”novelty”, ”curiosity” or ”surprise” of a state is computed as an intrinsic reward using mechanisms such as state-visiting count and prediction error (Schmidhuber, 1991; Stadie et al., 2015; Achiam & Sastry, 2017; Pathak et al., 2017). By considering this information, the agent is encouraged to search for areas that are less visited or have complex dynamics. However, as pointed out by Ecoffet et al. (2019), an agent driven by intrinsic reward may suffer from the problem of detaching from the frontiers of high intrinsic reward area. Due to catastrophic forgetting, it may not be able to go back to previous areas that have not yet been fully explored (Kirkpatrick et al., 2017; Ellefsen et al., 2015). Our method is able to keep tracking the novelty frontier and train policy to explore different areas in the frontier.\nDiverse Skill/Option Discovery: Methods that aim to learn a set of behaviours which are distinct from each other, allow the agent to interact with the environment without rewards for a particular task. Gregor et al. (2016) introduced an option discovery technique based on maximizing the mutual information between the options and the final states of the trajectories. Eysenbach et al. (2018); Florensa et al. (2017a); Savinov et al. (2018) proposed to learn a fixed set of skills by maximizing the mutual information through an internal objective computed using a discriminator. Achiam et al. (2018) extended the prior works by considering the whole trajectories and introduced a curriculum learning approach that gradually increases the number of skills to be learned. In these works, the exploration is encouraged implicitly through learning diverse skills. However, it is difficult to control the direction of exploration. 
In our method, we maintain a proposing module which tracks the global information of the states we have visited so far, and keep proposing reference points that guide the agent to the more promising areas for exploration.\nSelf-Goal Proposing: Self-goal proposing methods are often combined with a goal-conditioned policy (Kaelbling, 1993; Andrychowicz et al., 2017), where a goal (or task) generation model is\ntrained jointly with a goal reaching policy. The agent receives rewards in terms of completing the internal tasks which makes it possible to explore the state space without any supervision from the environment. Sukhbaatar et al. (2017) described a scheme with two agents. The first one proposes tasks by performing a sequence of actions and the other repeats the actions in reverse order. Held et al. (2018) introduced a method that automatically label and propose goals at the appropriate level of difficulty using adversarial training. Similar works are proposed by Colas et al. (2018); Veeriah et al. (2018); Florensa et al. (2017b), where goals are selected based on the learning progress. WardeFarley et al. (2018) trained a goal-conditioned policy by maximizing the mutual information between the goal states and the achieved states. The goals are selected from the agent’s recent experience with strategies. Later, Pong et al. (2019) applied a similar idea of using mutual information. They maximize the entropy of a goal sampling distribution. The focus of these methods is on learning a policy that can reach diverse goals. Although gradually increasing the scale of the goal proposing network, the agent may eventually cover the entire state space, exploration itself is not efficient. In our work, we adopt the same idea of maximizing the entropy of the goal sampling distribution by Pong et al. (2019). However, instead of using the goal-conditioned policy, we introduce a reference point-conditioned policy which greatly increases the efficiency of exploration." }, { "heading": "3 SKEW-EXPLORE: SEARCHING FOR THE SPARSE REWARD", "text": "We discuss the policy learning problem in continuous state and action spaces, which we model as an infinite-horizon Markov decision process (MDP). The MDP is fully characterized by a tuple (S,A, pa(s, s′), R′a(s, s′)), where S, the state space, and A, the action space, are subsets of Rn, the unknown transition probability p : S × A × S → [0, inf) indicates the probability density function of the next state s′ given the current state s ∈ S and the action a ∈ A. For each transition, the space associated environment E emits an extrinsic reward according to function R′ : S × A → R. The objective of the agent is to maximize the discounted return, i.e. return R = ∑∞ ts=0\nγtsrts , where γ is a discounted factor and rts is the reward received at each step ts. In this study, we consider an agent interacting in an environment E with sparse reward. The sparse reward r is modelled as a truncated Gaussian function with a narrow range. From previous interactions, the agent holds an interaction set It, in which transaction triples (sj ,aj , sj+1),∀j ∈ {1, · · · , T − 1} are contained. We also extract the states sj from It to form a history state set St, which contains all visited states by the agent until iteration t. The objective of our method is to find an arbitrary external goal in a continuous state space and converge to a policy that maximizes the R as fast as possible. This involves two processes 1) Find the external reward through efficient exploration. 
2) Converge to a policy that maximizes R once the external reward is found.\nWe can use the entropy of the history state set as a neutral objective to encourage exploration, since an agent that maximizes this objective should have visited all valid states uniformly. To describe it mathematically, we define a random variable S to represent the history states that the agent has visited. The distribution of S is estimated from the history state set St. Our goal is to encourage exploration by maximizing the entropy h(S) of the history states. However, using the entropy as the intrinsic reward directly may suffer from problems similar to other intrinsic motivated methods (Schmidhuber, 1991; Stadie et al., 2015; Achiam & Sastry, 2017; Pathak et al., 2017). As the reward of the same state is changing, the agent has the risk of detaching from the frontiers of high intrinsic reward area.\nWe introduce a concept called novelty frontier reference point, which can be sampled from a distribution that represents the novelty frontier (Ecoffet et al., 2019). The novelty frontier defined in our work represents the areas near the states with lower density in distribution p(s). The frontier reference points are sampled after the distribution of the novelty frontier is updated. We define a Z to represent all the history frontier reference points with probability density p(z) estimated from a set Zt that contains all novelty frontier reference points until iteration t. The conditional probability p(s|z) defines the behaviour of the agent with respect to each reference point. In this work, we model this behaviour using a state distribution function Kz(s − z) parameterized by the displacement between the state and the reference point. The function Kz needs to be chosen carefully as it should satisfy our expectation of the policy behaviour and also, provides an informative reward signal to train the policy. Mathematically, we can rewrite p(s) as p(s) = ∫ f(s|z)p(z)dz = ∫ Kz(s − z)p(z)dz. Generally, Kz(·) can be different for different z.\nHowever, to reduce the complexity of learning, we constrain Kz(·) to be consistent for any z, meaningKz(s−z) = K(s−z). The definition ofK(·) satisfies the definition of a kernel function. Using K(s− z), p(s) can then be further represented as\np(s) = ∫ K(s− z)p(z)dz\n= (K ∗ p)(s). By considering the law of convolution of probability distributions, we obtain S = Z + N, where N is a random variable characterized by a density function K(·). Now with this setup, we are able to to analyze our method’s performance using information theory. By considering the entropy’s relationship with mutual information h(S) = h(S|Z) +I(S;Z), we receive the final decomposition of our objective under the novelty frontier reference point-conditioned policy framework\nh(S) = h(Z) + h(S|Z)− h(Z|S). (1) Eq. 1 indicates that in order to maximize the h(S), we can individually maximize/minimize each term while making other terms fixed. In the following section, we will explain the optimization process in detail.\n3.1 MAXIMIZING h(Z): OBTAINING AN EXPANDING SET OF NOVELTY FRONTIER REFERENCE POINTS\nAs introduced above, h(Z) is the entropy estimated from the novelty frontier reference points setZt. To increase h(Z), we need to add a new reference point to Zt such that, the entropy estimated form Zt+1 is larger than the entropy estimated from Zt. 
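To make the identity p(s) = (K ∗ p)(s) and the resulting relation S = Z + N concrete, the following minimal sketch (assuming numpy and scipy; all variable names are illustrative) draws reference points Z, adds Gaussian noise N, and checks that the empirical density of S matches the kernel-smoothed density of Z:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma = 0.5  # std of the kernel K, i.e. of the noise variable N

# Reference points Z drawn from some frontier distribution p(z).
z = rng.uniform(-2.0, 2.0, size=100_000)

# States are reference points plus Gaussian exploration noise: S = Z + N.
s = z + rng.normal(0.0, sigma, size=z.shape)

# Evaluate the convolution (K * p)(x) by Monte Carlo over the sampled z.
grid = np.linspace(-3.0, 3.0, 13)
p_conv = np.array([norm.pdf(x - z, scale=sigma).mean() for x in grid])

# Compare against the empirical density of S on the same grid.
hist, edges = np.histogram(s, bins=240, range=(-4.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
p_emp = np.interp(grid, centers, hist)

print(np.max(np.abs(p_conv - p_emp)))  # small, up to Monte Carlo error
```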
In our method, the frontier reference points are sampled from the novelty frontier distribution which represents less history areas according to the current history states. Pong et al. (2019) proposed a method to skew the distribution of the history states using importance sampling, such that states with lower density can be proposed more often. In our work, we use a similar way to estimate the novelty frontier distribution. There are three steps in our process. In the first step, we estimate the p(s) from St using a density estimator e.g. Kernel Density Estimation (KDE). In the second step, we sample Q states {s0, · · · , sQ} from p(s), and compute the normalized weight for each state using Eq. 2\nwi = 1\nYα p(si)p(si) α α ∈ [− inf, 0), Yα = N∑ n=1 p(s = sQ)p(s = sQ) α, (2)\nwhere Yα is a normalizing constant. The state with lower p(s) has higher weight and vice versa. Finally, we utilize a generative model training scheme Tg(·, ·) (e.g. weighted KDE), together with sampled states and weights to get a skewed distribution pskewed(s) = Tg({s0, · · · , sn}, {w0, · · · , wn}) to represent the novelty frontier distribution.\nIf Q is big enough, by choosing a α appropriately, we are able to expand our frontiers after each iteration. As a consequence, the distribution estimated from Zt will become more and more uniform and its range will become larger and larger, just like annual ring of the tree. The entropy of a continuous uniform function U(p, q) is ln(p − q) and if the distribution has a larger range, the entropy is larger as well. Fig 2 illustrates the estimated frontier distribution skewed from p(s).\n3.2 MAXIMIZING h(S|Z)− h(Z|S): INCREASING THE EXPLORATION RANGE AROUND REFERENCE POINTS\nThe conditional entropy of h(S|Z) and h(Z|S) are highly correlated, maximizing/minimizing them individually are difficult. Therefore, in this section, we consider to maximize h(S|Z) - h(Z|S)\nas a whole. Using the relation S = Z + N, we rewrite the expression as h(S|Z) − h(Z|S) = h(Z + N|Z)− h(Z|Z + N), which can be further simplified (see Appendix D) as\nh(Z|S)− h(S|Z) ≥ h(N)− h(Z). This implies that there is a lower bound for the expression h(S|Z)− h(Z|S). For a fixed h(Z), we can maximize the lower bound h(N)− h(Z) by increasing h(N). h(N) is related to the shape and variance of the exploration distribution near the reference point. In our method, we model N as a Gaussian distribution with zero mean. In an ideal case, we would like to have as large variance as possible. However, increasing the variance also results in learning difficulty, as we need a longer trajectory to evaluate the performance and more samples to update the network. Therefore, we use the variance to control the trade-off between exploration efficiency and learning efficiency." }, { "heading": "3.3 DESIGNING THE REWARD FUNCTION", "text": "Our algorithm requires the policy to move around a given reference point, and the distribution of the states in the trajectory should follow a Gaussian distribution centered at the reference point. In this section, we introduce an intrinsic reward function to train such policy by minimizing the the Kullback-Leibler (KL) divergence between the trajectory distribution and the desired Gaussian distribution. For each given reference point zi, we collect a trajectory τi by running the policy with the given reference point, indicated as π(zi), for M steps. Then, we estimate the probability density of each state s in τi, referred to as pτi(s), using a density estimator. 
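A minimal sketch of this three-step skewing procedure, assuming a recent scipy and a Gaussian KDE as both the density estimator and the generative model Tg; following Skew-Fit, the weights are taken as w_i ∝ p(s_i)^α with α < 0 (one reading of Eq. 2):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
alpha = -1.0  # alpha in [-inf, 0); more negative skews harder to low density

# History state set St (synthetic 1-D states, concentrated near the start).
states = rng.normal(0.0, 1.0, size=2000)

# Step 1: estimate p(s) from St with a density estimator (here a KDE).
p = gaussian_kde(states)

# Step 2: sample Q states from p(s) and weight them by p(s_i)**alpha,
# so states with low density (the novelty frontier) get large weights.
q_samples = p.resample(500, seed=1)[0]
w = p(q_samples) ** alpha
w = w / w.sum()  # the 1/Y_alpha normalization

# Step 3: fit a weighted KDE on the samples -> skewed frontier distribution.
p_skewed = gaussian_kde(q_samples, weights=w)

# New novelty frontier reference points for the next iteration.
z_new = p_skewed.resample(5, seed=2)[0]
print(z_new)
```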
Finally, we check the probability density of s in the Gaussian distribution centered at zi, referred to as pzi(s). The KL-divergence between the trajectory distribution pτi and the desired distribution pzi is formulated as follows\nDKL(pτi(·) | pzi(·)) = Ezi∼(Zt),τi∼π(zi),s∼τipτi(s) log pτi(s)\npzi(s) . (3)\nTo minimize the KL-divergence, the intrinsic reward of s with respect to zi is computed as rint(s, zi) = log(pzi(s))− log(pτi(s))). (4)\nThe intrinsic reward function measures the difference between the desired density of s in the trajectory and the actual density achieved. The reward is positive when the actual density is smaller than the desired one, when states in the trajectory are too far from the reference point, and the reward is negative when the actual density is larger than the desired one, when the agent stays too long at the reference point. An extrinsic reward rext(s) is provided by the environment and the total reward of a time step is defined as the weighted sum of the intrinsic and the extrinsic reward. The extrinsic reward should be much greater than the intrinsic reward. The reward of each time step r(s, zi) is defined as\nr(s, zi) = wint · rint(s, zi) + wext · rext(s), (5) where, wint and wext are respective weights for internal and external rewards. The performance of the policy is closely related to the set Zt, as it records the reference points we used to train the policy until iteration t. As described in section 3.1, while we increase the entropy h(Z) by proposing new reference points form the novelty frontier to train the policy, the policy gradually gain skills to explore different areas. When a state with a large extrinsic reward is discovered, the policy eventually ignores all given reference points and converge to reach the state with the extrinsic reward. Algorithm 1 shows the whole Skew-Explore algorithm using pseudo code and our implementation of the algorithm is available online 1.\n1https://anonymous.4open.science/r/b4596073-4cbc-4ac6-b85b-e9a786909058/\nAlgorithm 1 Skew-Explore 1: procedure SKEW-EXPLORE 2: History state set S0 = {} 3: History novelty frontier reference points set Z0 = {} 4: Randomly sample L novelty frontier points zi 5: Z0 = Z0 ∪ {z1, ..., zL}. 6: for t = 1, 2, 3... do 7: Collect a set of states sq by running policy giving different frontier reference point zi. 8: Compute reward for each state using Eq. 5. 9: Update history state set St = St−1 ∪ {s1, ..., sQ} and estimate p(s) from St 10: Estimate the novelty frontier distribution pskewed(s) by skewing p(s). 11: Sample L new novelty frontier points zl ∼ pskewed(s). 12: Update history novelty frontier reference points set Zt = Zt−1 ∪ {z1, ..., zL}. 13: Update policy according to the rewards\n3.4 SCALING TO HIGHER DIMENSIONAL STATES\n4 EXPERIMENT\nIn this section, we evaluate our algorithm from three perspectives. 1) How efficient is our algorithm in terms of exploring the entire state space, and how different choice of variance affects the efficiency? 2) Is our algorithm able to converge to a stable solution for tasks with sparse reward? 3) Is our algorithm able to solve a complicated sparse reward task with long horizon? The implementation details of the experiments can be found in Appendix E. Two metrics are considered to evaluate the performance. They are the state distribution entropy h(S) and the coverage, which are estimated from history state set St. We describe how we estimate the two metrics and their difference in Appendix A and B. 
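A minimal sketch of the per-step reward of Eqs. 4 and 5 for a one-dimensional state, assuming numpy and scipy; the trajectory, the weights w_int and w_ext, and the extrinsic reward are illustrative placeholders:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
z_i, sigma = 1.0, 0.5  # reference point and std of the desired Gaussian
w_int, w_ext = 1.0, 100.0  # the extrinsic reward should dominate when present

# Trajectory tau_i collected by running pi(z_i) for M steps (synthetic here).
traj = z_i + rng.normal(0.0, 0.2, size=200)

p_tau = gaussian_kde(traj)  # density the policy actually achieved
p_z = norm(loc=z_i, scale=sigma)  # desired density around the reference point

def reward(s, r_ext=0.0):
    r_int = np.log(p_z.pdf(s)) - np.log(p_tau([s])[0])  # Eq. 4
    return w_int * r_int + w_ext * r_ext  # Eq. 5

print(reward(z_i + 1.0))  # positive: trajectory under-visits this region
print(reward(z_i))        # negative: the agent lingers at the reference point
```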
A short video regarding the experiments can be found online 2." }, { "heading": "4.1 POWER OF EXPLORATION", "text": "In the first experiment, we evaluate our algorithm in term of the efficiency of exploring the state space. We test our algorithm in two simulated environments, the PointMaze and the DoorOpen environments (Fig. 4). In the PointMaze environment, a point agent is controlled to move inside a maze with narrow passages. In the DoorOpen environment, a YuMi robot can open a door by grabbing the door handle. The PointMaze environment was previously used by Florensa et al. (2017a); Eysenbach et al. (2018); Pong et al. (2019), whilst environments similar to the DoorOpen environment were used by Kalakrishnan et al. (2011); Chebotar et al. (2017); Pong et al. (2019). The objective of the tasks is to explore the entire state space in a minimum amount of time. In order to evaluate the performance, we measure the efficiency as the overall coverage and the entropy of the density estimated from all history states. We compare our algorithm with two baseline algorithms: the random network distillation (RND) proposed by Burda et al. (2018) which is an approach using prediction error as the intrinsic reward, and Skew-Fit proposed by Pong et al. (2019) which combines a goal proposing network with a goal-conditioned policy.\nWe consider two configurations. The first one is the proximal policy optimization (PPO) Schulman et al. (2017) together with a Long Short-Term Memory network (LSTM) Hochreiter & Schmidhuber (1997). The second configuration is soft actor-critic (SAC) Haarnoja et al. (2018) and hindsight experience replay (HER) Andrychowicz et al. (2017). We note here that RND is only tested with PPO and LSTM as it was not designed for off-policy methods. For each configuration, we run\n2https://www.dropbox.com/s/xxw7ug3lnud3h0j/video_submission.mp4\n12 times and compare the mean and variance. Fig. 5 shows the results for all 5 configurations. We can see that our method, SAC+HER+Skew-Explore, makes both coverage and entropy increase faster than the other methods. It also increases with relatively small variances. Fig 6 illustrates\nhow coverage changes for both our method Skew-Explore and Skew-Fit. In this figure, we see how our method is able to cover the state space (area in this case) faster than Skew-Fit. To further analyze how different choices of the variance of N affects the exploration efficiency, several values of variances 02, 12, 22, 32, 42 and 62 are tested in the PointMaze environment. After 80 iterations, estimated entropy 40.2±1.2, 49.3±0.7, 51.4±0.8, 51.4±0.7, 51.4±0.7 and 48.9±3.3 are received. We observe that while the variance increases, the performance first increases and then decreases." }, { "heading": "4.2 POWER OF SOLVING A SINGLE SPARSE REWARD TASK", "text": "The power of exploration is an important aspect, but we also want our algorithm to converge to a stable policy that maximizes the extrinsic reward for different sparse reward tasks. To this end, we use the same environments (Fig. 4) as in the previous experiment. In each environment, we select five uniformly distributed target points from the area of interest and assign extrinsic rewards when the agent reaches these points. For each target point, we train an individual policy to reach it. Hypothetically, influenced by the extrinsic reward, the agent eventually ignores the internal\ngoals generated by the goal proposing module and reaches the target points consistently. 
In order to evaluate the performance of our algorithm, we measure how reliably the agent is able to reach each target point. To this end, we collect the final 10 states from the 10 most recent trajectories and define criteria for convergence as the percentage of receiving the extrinsic rewards. If more than 90% of the states receive the extrinsic reward with a standard deviation of less then 3%, we say the agent solved the task successfully. In this experiment, we use the configuration SAC+HER+SkewExplore which achieves the best performance in the previous experiment. Table 1 shows how many trajectories the algorithm needs to reach the criteria of convergence. This experiment thus shows that our algorithm is able to solve a sparse reward task, by obtaining a policy with a limited number of trajectories. Additional results can be found in Appendix F." }, { "heading": "4.3 TASK WITH A LONG HORIZON AND REAL WORLD DEMONSTRATION", "text": "In the third experiment, we evaluate the ability of our algorithm in terms of solving a sparse reward task with a long horizon and test the performance of the converged policy using a real world YuMi robot. We increase the complexity of the environment by adding a box and a button to the DoorOpen environment used in the previous two experiments. We design a task called OpenPressClose which needs a long sequence of procedures to be solved. The sequence includes 1) open the box, 2) press the button inside the box and 3) close the box. The extrinsic sparse reward is only given to the agent after all procedures in the sequence are done. This task is exceptionally challenging as each intermediate procedure requires a set of continuous actions in correct order to be achieved and no intermediate reward is provided to guide the search. Therefore this task requires the power of efficient exploration to discover the state that provides an extrinsic reward. If the algorithm fails to explore efficiently, the rewarding state would never be found and no policy will be learned. The results show that the algorithm is able to discover the extrinsic reward and converge to a stable solution. Fig 7a shows the change of average extrinsic reward per step over iterations and Fig 7b shows the converged policy in sequential order. The resulting policy is deployed to a real world YuMi robot as shown in Fig 1. A demonstration of the real robot solving the task can be found in the video." }, { "heading": "5 CONCLUSION", "text": "In this work, we propose an algorithm named Skew-Explore, a general framework for continuous state exploration. Inspired by Skew-Fit Pong et al. (2019), the main idea of Skew-Explore is to encourage exploration around the novelty frontier reference points proposed by a latent variable proposing network. The algorithm is able to track the global information of entropy of density distribution estimated by the states stored in a history state set, which helps to maximize a corresponded metrics, namely entropy and coverage. Two experiments are conducted to test the power of SkewExplore on the exploration problem and the single sparse reward problem. In the first experiment, we found that our algorithm Skew-Explore, using SAC and HER together, has the fastest exploration rate. In the second experiment, we found that our algorithm is also able to converge to a stable policy when a single sparse reward is given. 
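Returning to the convergence criterion of Section 4.2, a small sketch of the check, assuming access to the final 10 states of the 10 most recent trajectories and reading the 3% standard deviation as the deviation of per-trajectory success rates (all names illustrative):

```python
import numpy as np

def has_converged(final_states, gets_reward):
    # final_states: (10 recent trajectories, 10 final states, state_dim)
    hits = np.array([[gets_reward(s) for s in traj] for traj in final_states])
    rates = hits.mean(axis=1)  # success rate per trajectory
    return rates.mean() > 0.90 and rates.std() < 0.03

# Example: reward within a radius of 0.15 of the goal, as in PointMaze.
rng = np.random.default_rng(0)
final_states = rng.normal(0.0, 0.03, size=(10, 10, 2))
print(has_converged(final_states, lambda s: np.linalg.norm(s) < 0.15))
```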
As a demonstrator, we used an environment where a robotic manipulator needs to 1) open a door, 2) press a button inside and 3) close the door in a sequence but only with a sparse reward given at the end. We implemented the fully converged policy on a real YuMi robot using policy transfer. Future work will include investigating if we can improve the efficiency of policy convergence by adjusting the proposing network’s distribution. Additionally, we will examine whether clustering can increase the efficiency for exploration. Moreover, we will look for a better reward function than KL divergence between Gaussian-based goal distribution and the trajectory distribution." }, { "heading": "Appendices", "text": "A DIFFERENCE BETWEEN COVERAGE AND h(S)\nSince the reward is given to an arbitrary state in the whole space, two metrics could be considered to evaluate the performance of the agent, namely the state distribution entropy h(S) and the coverage fc(St). The distribution of S can be estimated using any density estimator trained using St.\nThe coverage of St is defined as a mapping fc : Sn → R, K(St) :=\n∫\nS maxi(φ(si)) ds, where φ : [0, inf) → R is a radial basis function with a Gaussian kernel centred at si ∈ St, and n equals |St|. Intuitively, these two measures describe similar characteristics of the data distribution. However, they also have significant differences. One major difference is that when a new data point is presented in the state space, the coverage can only increase whilst the entropy can decrease if the new data point is given in an often visited area. Figure 8 illustrates the difference between coverage and density estimation when two and three points are given. Coverage gives a better intuition about where the agent should search for the sparse reward.\nTo find the sparse reward, we maximize the history states entropy to increase the coverage. A proof for one-dimensional state space is given in Appendix C. Proof of high dimensional cases is related to Kepler theorem Hales (2005) and is beyond the scope of this paper. We aim to maximize the entropy of the state density in the St, which isH(S)." }, { "heading": "B ESTIMATING STATE ENTROPY AND COVERAGE", "text": "In practice, we uniformly discretize the entire state space along each dimension and use the center of the discretized grids to estimate the entropy and the coverage. We do not check the validity of each grid. Therefore, there are grids that are not reachable from the initial state, and the maximum possible coverage is less than 1. The PointMaze environment is discretized to 50 · 50 grids. The DoorOpen environment is discretized to 10 · 10 · 10 · 2 · 10 grids. The entropy and coverage are summed over the discretized states.\nC PROOF OF RELATION BETWEEN COVERAGE AND h(S)\nIn this section, we prove Lemma C.1. Lemma C.1. Given N movable points P = {p1, p2, · · · , pN} distributed in R with boundary Bl ≤ pi ≤ Bh. Maximizing the coverage estimated from these points using a Gaussian kernel will make the points to be uniformly distributed within the boundary, thus maximizing the entropy of the points’ distribution.\nProof. According to the definition of coverage, the area under all Gaussian kernels could be written as :\nfc(P) = ∫ R max i (φ(pi)) dx (6)\nWe rewrite the equation as subtraction of all the areas and the overlapping areas. Fig 9 illustrates these concepts. 
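For reference, a minimal sketch of the discretized entropy and coverage estimates defined in Appendices A and B, assuming numpy; the 2-D grid and the kernel width are illustrative:

```python
import numpy as np

def entropy_and_coverage(states, lo=-1.0, hi=1.0, bins=50, kernel_std=0.05):
    # Discretize the (here 2-D) state space into bins x bins grid cells.
    xs = np.linspace(lo, hi, bins)
    gx, gy = np.meshgrid(xs, xs)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

    # Entropy of the empirical state distribution over the grid cells.
    hist, _ = np.histogramdd(states, bins=bins, range=[(lo, hi), (lo, hi)])
    p = hist.ravel() / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))

    # Coverage: integral of max_i phi(s_i), with a Gaussian kernel per state.
    d2 = ((grid[:, None, :] - states[None, :, :]) ** 2).sum(-1)
    phi_max = np.exp(-d2 / (2 * kernel_std ** 2)).max(axis=1)
    cell_area = ((hi - lo) / (bins - 1)) ** 2
    coverage = phi_max.sum() * cell_area
    return entropy, coverage

states = np.random.default_rng(0).uniform(-1, 1, size=(500, 2))
print(entropy_and_coverage(states))
```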
The area from negative infinity to x under Gaussian distribution is defined as.\nΦ(x) = ∫ x −∞ φ(x)dx = 1 2 ( 1 + erf ( x√ 2 )) , (7)\nfc(P) = NΦ(∞)− N−1∑ i=1 2Φ( −ti 2 ) (8)\n= 1− N−1∑ i=1 erf ( −ti 2 √ 2 ) (9)\n(10)\nAs maximizing 1− ∑N−1 i=1 erf ( −ti 2 √ 2 ) is equivalent to minimizing ∑N−1 i=1 erf ( −ti 2 √ 2 ) . We formulate an optimization problem as follows:\nminimize t1,t2,··· ,ti N−1∑ i=1 erf ( −ti 2 √ 2 ) subject to ti ≥ 0,∀ti\nN−1∑ i=1 ti ≤ Bh −Bl,\nSolving this problem using the KKT conditions will give us that when ti = Bh−BlN−1 ,∀ti, the expression is minimized. This result shows that when the coverage estimated from these points is maximized, the points need to be distributed evenly within the boundary. As a consequence, the entropy of the distribution of these points is also maximized.\nD PROOF OF h(S|Z)− h(Z|S) ≥ h(N)− h(Z)\nIn this section, we show our analysis and proof of the expression\nh(S|Z)− h(Z|S) ≥ h(N)− h(Z).\nLet Z and S be jointly distributed continuous random variables, where S is related to Z through a conditional PDF f(s|z) defined for all z. The conditional PDF f(s|z) follows a Gaussian distribution centred at different z with standard deviation σ > 0.\nLemma D.1. S can be represented as the sum of Z and N, where N is a r.v. with Gaussian PDF of 0 mean and standard deviation σ, independent from Z.\nS = Z + N (11)\nProof\nUsing the law of total probability, the density of f(s) can be written as:\nf(s) = ∫ f(s|z)f(z)dz\n=\n∫ 1\nσ √ 2π e−(s−z) 2/2σ2f(z)dz\n= ∫ K(s− z)f(z)dz\n= (K ∗ f)(s), (12)\nwhere K is the density function of N. According to Theorem 7.1 in Grinstead & Snell (2012), when the density function f(s) is the convolution of the density function of G and N, S = G + N holds." }, { "heading": "Q.E.D.", "text": "Lemma D.2. When S = Z + N holds, the following expression\nh(S|Z)− h(Z|S) ≥ h(N)− h(Z) (13)\nholds." }, { "heading": "Proof.", "text": "Remark. Here we note that we will use following equations in later proof,\nh(Z) = h(Z|N) = h(Z + N|N) (14) h(N) = h(N|Z) = h(Z + N|Z). (15)\nIn Eq. 14 and Eq. 15, h(Z) = h(Z|N) and h(N) = h(N|Z) are true because Z and N are independent. Moreover, h(Z|N) = h(Z + N|N) and h(N|Z) = h(Z + N|Z) are true because there is no gain of information by adding what is given already.\nWe expand the Eq. 13 by replacing the S with Z + N and get\nh(S|Z)− h(Z|S) = h(Z + N|Z)− h(Z|Z + N) = h(N)− h(Z|Z + N) (16)\nNow we proceed to prove that h(Z|Z + N) = h(N|Z + N),\nh(Z|Z + N) = h(G + N|Z) + h(Z)− h(Z + N) = h(N) + h(Z)− h(Z + N) = h(N) + h(Z + N|N)− h(Z + N) = h(N,Z + N)− h(Z + N) = h(N|Z + N). (17)\nWe replace h(Z|Z + N) with h(N|Z + N) in Eq. 16 to have\nh(S|Z)− h(Z|S) = h(N)− h(N|Z + N) = I(Z + N;N)\n= I(S;N). (18)\nAccording to the property of mutual information, I(S;N) ≥ 0, with equality if and only if S and N are independent. As S = Z + N, I(S;N) = 0 is true only when the PDF of N is a Dirac delta function. In our case, PDF of N is a Gaussian function and σ > 0, as a consequence, the expression h(S|Z)− h(Z|S) > 0 holds. Additionally, since h(Z|Z + N) is the same as h(N|Z + N), by law of conditioning reduces entropy, we has an inequality as follows\nh(S|Z)− h(Z|S) = h(N)− h(N|Z + N) h(S|Z)− h(Z|S) = h(N)− h(Z|Z + N) h(S|Z)− h(Z|S) ≥ h(N)− h(Z). (19)" }, { "heading": "Q.E.D", "text": "If we link it back to the paper, Z is the novelty frontier reference points proposing distribution and S represents the achieved state distribution by executing goals sampled form Z. 
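As a quick sanity check of Lemma D.2, when both Z and N are Gaussian every term has a closed form, and the inequality can be verified directly (a sketch, assuming numpy):

```python
import numpy as np

def h_gauss(var):  # differential entropy of a 1-D Gaussian
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

var_z, var_n = 4.0, 1.0  # Z ~ N(0, var_z), N ~ N(0, var_n), S = Z + N

h_n = h_gauss(var_n)  # h(N) = h(S|Z), since N is independent of Z
h_z = h_gauss(var_z)
# For jointly Gaussian (Z, S): Var(Z|S) = var_z * var_n / (var_z + var_n).
h_z_given_s = h_gauss(var_z * var_n / (var_z + var_n))

lhs = h_n - h_z_given_s  # h(S|Z) - h(Z|S)
rhs = h_n - h_z          # the lower bound of Lemma D.2
print(lhs, rhs, lhs >= rhs)  # True for any var_z, var_n > 0
```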
N defines the behaviour of the policy according to a given reference point. When N has a Dirac delta distribution, the policy is a goal-conditioned policy (as in Skew-Fit). When N is a Gaussian distribution, the policy moves around the reference point following a Gaussian distribution. Compared to the goal conditioned policy, the reference point-conditioned policy achieves higher state entropy. The extra amount of entropy equals to the mutual information I(Z + N;N).\nE IMPLEMENTATION DETAILS" }, { "heading": "E.1 ENVIRONMENT DETAILS", "text": "The three environments used in the experiments are implemented using Mujoco Todorov et al. (2012).\nPointMaze: In this environment, an agent travels in a maze contains narrow passages. The observation is the x, y position of the agent and the action is the velocity along the x, y axis. The maximum velocity for each dimension is 0.12. In the sparse reward experiment, the agent receives an extrinsic reward when the distance to the goal is less than 0.15.\nDoorOpen: In this environment, we use a single arm Yumi robot with 7 degrees of freedoms (DOFs). The robot is controlled in Cartesian space and the orientation of the end-effector is fixed. The robot can grab a door handle and open the door on a table. The maximum opening of the door is 90 degrees. The observation space is 5 dimensional, including the x, y, z position of the end-effector, the open/close status of the gripper and the opening angle of the door. The action is the velocity along x, y, z axis. The valid action space of the robot is 10cm×11cm×10cm. In the sparse reward experiment, the agent receives an extrinsic reward when the angle difference to the goal is less than 2 degrees.\nOpenPressClose: This environment contains a single arm Yumi robot, a box with a door and a button inside the box. The robot configuration is the same as the DoorOpen environment. The maximum opening of the door is 143 degrees. The observation space of the robot is 6 dimensional, including the x, y, z position of the end-effector, the open/close status of the gripper, the opening angle of the door and the status of the button. The action is the velocity along x, y, z axis. The valid action space of the robot is 2cm× 28.5cm× 16cm. In the sparse reward experiment, the agent needs to press the button down for more than 1cm and close the door completely.\nE.2 HYPERPARAMETER\nWe use the same network structure for all experiments. The policy network contains 5 fully-connected layers with the number of units in all layers 32, 64, 128, 64, 32. We use ReLU as the activation function and there is no activation for the output layer and for all experiments, the length of the trajectory is 200.\nThe RND contains 3 fully-connected layers with the number of units 32, 64, 64 in the random target network and contains 4 fully-connected layers with the number of units\n32, 64, 64, 128 in the prediction network.\nFor PPO, we use the batch of 5, 000 and 15 epochs per iteration. We update the state distribution and the goal distribution at every 5, 000 step." }, { "heading": "F ADDITIONAL RESULTS", "text": "The second experiment test whether the algorithm is able to converge to a given sparse reward after exploration. As mentioned in the section 4.2, five points are selected to test the convergence of the algorithm given a sparse reward. Fig 10 shows the exact points in the PointMaze and DoorOpen environments. The points are selected uniformly from the area of interests. 
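For the network specification in E.2 above, a minimal sketch assuming PyTorch and one reading of the description (the five listed layers are hidden layers followed by a linear output); the input and output sizes are illustrative, e.g. the 5-D observation and 3-D action of DoorOpen:

```python
import torch.nn as nn

def make_policy(obs_dim=5, act_dim=3):
    # Hidden layers of 32, 64, 128, 64, 32 units with ReLU activations, then
    # a linear output layer with no activation, as described in E.2.
    sizes = [obs_dim, 32, 64, 128, 64, 32]
    layers = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(n_in, n_out), nn.ReLU()]
    layers.append(nn.Linear(sizes[-1], act_dim))
    return nn.Sequential(*layers)

print(make_policy())
```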
For each point, twelve experiments have been executed to estimate the mean and the variance of the algorithm’s performance. Fig 11 and Fig 12 show the relationship between the convergence criteria and the number of iterations for the PointMaze environment and the DoorOpen environment, respectively." } ]
2019
SKEW-EXPLORE: LEARN FASTER IN CONTINUOUS SPACES WITH SPARSE REWARDS
SP:9043128647ca5b26b38c11af6fddf166e012a390
[ "This paper presents a novel meta reinforcement learning algorithm capable of meta-generalizing to unseen tasks. They make use of a learned objective function used in combination with DDPG style update. Results are presented on different combinations of meta-training and meta-testing on lunar, half cheetah, and hopper environments with a focus on meta-generalization to vastly different environments.", "The paper proposes a meta reinforcement learning algorithm called MetaGenRL, which meta-learns learning rules to generalize to different environments. The paper poses an important observation where learning rules in reinforcement learning to train the agents are results of human engineering and design, instead, the paper demonstrates how to use second-order gradients to learn learning rules to train agents. Learning learning rules in general has been proposed and this paper is another attempt to further generalize what could be learned in the learning rules. The idea is verified on three Mujoco domains, where the neural objective function is learned from one / two domains, then deployed to a new unseen domain. The experiments show that the learned neural objective can generalize to new environments which are different from the meta-training environments. " ]
Biological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Our novel meta reinforcement learning algorithm MetaGenRL is inspired by this process. MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function that decides how future individuals will learn. Unlike recent meta-RL algorithms, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training. In some cases, it even outperforms human-engineered RL algorithms. MetaGenRL uses off-policy second-order gradients during meta-training that greatly increase its sample efficiency.
[ { "affiliations": [], "name": "LEARNED OBJECTIVES" }, { "affiliations": [], "name": "Louis Kirsch" }, { "affiliations": [], "name": "Sjoerd van Steenkiste" }, { "affiliations": [], "name": "Jürgen Schmidhuber" } ]
[ { "authors": [ "Ferran Alet", "Martin F Schneider", "Tomas Lozano-Perez", "Leslie Pack Kaelbling" ], "title": "Meta-learning curiosity algorithms", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gómez Colmenarejo", "Matthew W. Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando De Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Sarah Bechtle", "Artem Molchanov", "Yevgen Chebotar", "Edward Grefenstette", "Ludovic Righetti", "Gaurav Sukhatme", "Franziska Meier" ], "title": "Meta-learning via learned loss", "venue": null, "year": 1906 }, { "authors": [ "Yoshua Bengio", "Samy Bengio", "Jocelyn Cloutier" ], "title": "Learning a synaptic learning", "venue": "rule. Université de Montréal,", "year": 1990 }, { "authors": [ "Jeff Clune" ], "title": "AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence", "venue": "arXiv preprint arXiv:1905.10985,", "year": 2019 }, { "authors": [ "P Dayan", "G Hinton" ], "title": "Feudal Reinforcement Learning", "venue": "Advances in Neural Information Processing Systems (NIPS)", "year": 1993 }, { "authors": [ "Yan Duan", "John Schulman", "Xi Chen", "Peter L. Bartlett", "Ilya Sutskever", "Pieter Abbeel" ], "title": "RLˆ2: Fast Reinforcement Learning via Slow Reinforcement Learning", "venue": "arXiv preprint arXiv:1611.02779,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Felix A Gers", "Jürgen Schmidhuber", "Fred Cummins" ], "title": "Learning to Forget: Continual Prediction with LSTM", "venue": "Neural Computation,", "year": 2000 }, { "authors": [ "Erin Grant", "Chelsea Finn", "Sergey Levine", "Trevor Darrell", "Thomas Griffiths" ], "title": "Recasting GradientBased Meta-Learning as Hierarchical Bayes", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "David Ha", "Jürgen Schmidhuber" ], "title": "Recurrent world models facilitate policy evolution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "S Hochreiter", "J Schmidhuber" ], "title": "Long Short-Term Memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Sepp Hochreiter", "A. Steven Younger", "Peter R. 
Conwell" ], "title": "Learning to learn using gradient descent", "venue": "In International Conference on Artificial Neural Networks,", "year": 2001 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "VIME: Variational Information Maximizing Exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Rein Houthooft", "Richard Y. Chen", "Phillip Isola", "Bradly C. Stadie", "Filip Wolski", "Jonathan Ho", "Pieter Abbeel" ], "title": "Evolved Policy Gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Wojciech M Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio Garcia Castañeda", "Charles Beattie", "Neil C Rabinowitz", "Ari S Morcos", "Avraham Ruderman", "Nicolas Sonnerat", "Tim Green", "Louise Deason", "Joel Z Leibo", "David Silver", "Demis Hassabis", "Koray Kavukcuoglu", "Thore Graepel" ], "title": "Human-level performance in 3D multiplayer games with population-based reinforcement learning", "venue": "Science (New York, N.Y.), 364(6443):859–865,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Werner M Kistler", "Wulfram Gerstner", "J Leo van Hemmen" ], "title": "Reduction of the Hodgkin-Huxley equations to a single-variable threshold model", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Ke Li", "Jitendra Malik" ], "title": "Learning to Optimize", "venue": "arXiv preprint arXiv:1606.01885,", "year": 2016 }, { "authors": [ "Ke Li", "Jitendra Malik" ], "title": "Learning to Optimize Neural Nets", "venue": "arXiv preprint arXiv:1703.00441,", "year": 2017 }, { "authors": [ "Eric Liang", "Richard Liaw", "Philipp Moritz", "Robert Nishihara", "Roy Fox", "Ken Goldberg", "Joseph E Gonzalez", "Michael I Jordan", "Ion Stoica" ], "title": "Rllib: Abstractions for distributed reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. 
Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Luke Metz", "Niru Maheswaranathan", "Brian Cheung", "Jascha Sohl-Dickstein" ], "title": "Learning Unsupervised Learning Rules", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad" ], "title": "A Simple Neural Attentive Meta-Learner", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing Atari with Deep Reinforcement Learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Alex Nichol", "Vicki Pfau", "Christopher Hesse", "Oleg Klimov", "John Schulman Openai" ], "title": "Gotta Learn Fast: A New Benchmark for Generalization in RL", "venue": "arXiv preprint arXiv:1804.03720,", "year": 2018 }, { "authors": [ "Scott Niekum", "Lee Spector", "Andrew Barto" ], "title": "Evolution of reward functions for reinforcement learning", "venue": "In Proceedings of the 13th annual conference companion on Genetic and evolutionary computation,", "year": 2011 }, { "authors": [ "Emilio Parisotto", "Jimmy Lei Ba", "Ruslan Salakhutdinov" ], "title": "Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning", "venue": "arXiv preprint arXiv:1511.06342,", "year": 2015 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A. Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In 34th International Conference on Machine Learning, ICML 2017,", "year": 2017 }, { "authors": [ "Matthias Plappert", "Marcin Andrychowicz", "Alex Ray", "Bob McGrew", "Bowen Baker", "Glenn Powell", "Jonas Schneider", "Josh Tobin", "Maciek Chociej", "Peter Welinder", "others" ], "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research", "venue": "arXiv preprint arXiv:1802.09464,", "year": 2018 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic Backpropagation and Approximate Inference in Deep Generative Models", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Andrei A. Rusu", "Sergio Gomez Colmenarejo", "Caglar Gulcehre", "Guillaume Desjardins", "James Kirkpatrick", "Razvan Pascanu", "Volodymyr Mnih", "Koray Kavukcuoglu", "Raia Hadsell" ], "title": "Policy Distillation", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Andrei A. 
Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-Learning with Latent Embedding Optimization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew Botvinick", "Daan Wierstra", "Timothy Lillicrap" ], "title": "MetaLearning with Memory-Augmented Neural Networks", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "J Schmidhuber" ], "title": "Learning to Generate Sub-Goals for Action Sequences", "venue": "Artificial Neural Networks,", "year": 1991 }, { "authors": [ "J Schmidhuber" ], "title": "On learning how to learn learning strategies", "venue": "Technical Report FKI-198-94, Fakultät für Informatik, Technische Universität München,", "year": 1994 }, { "authors": [ "J Schmidhuber", "J Zhao" ], "title": "Direct policy search and uncertain policy evaluation", "venue": "Technical Report IDSIA-50-98,", "year": 1998 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Evolutionary principles in self-referential learning", "venue": "Diploma thesis, Institut für Informatik, Technische Universität München,", "year": 1987 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Making the world differentiable: On Using Fully Recurrent Self-Supervised Neural Networks for Dynamic Reinforcement Learning and Planning in Non-Stationary Environments", "venue": "Technical Report FKI-126-90 (revised), Institut für Informatik, Technische Universität München,", "year": 1990 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "A Possibility for Implementing Curiosity and Boredom in Model-Building Neural Controllers", "venue": "Proc. of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats,", "year": 1991 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "A self-referential weight matrix", "venue": "In Proceedings of the International Conference on Artificial Neural Networks,", "year": 1993 }, { "authors": [ "John Schulman", "Sergey Levine", "Philipp Moritz", "Michael I. Jordan", "Pieter Abbeel" ], "title": "Trust Region Policy Optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "HighDimensional Continuous Control Using Generalized Advantage Estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal Policy Optimization Algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Guy Lever", "Nicolas Heess", "Thomas Degris", "Daan Wierstra", "Martin Riedmiller" ], "title": "Deterministic policy gradient algorithms", "venue": "In 31st International Conference on Machine Learning, ICML 2014,", "year": 2014 }, { "authors": [ "Satinder Singh", "A.G. 
Barto", "Nuttapong Chentanez" ], "title": "Intrinsically motivated reinforcement learning", "venue": "18th Annual Conference on Neural Information Processing Systems (NIPS),", "year": 2004 }, { "authors": [ "Flood Sung", "Li Zhang", "Tao Xiang", "Timothy Hospedales", "Yongxin Yang" ], "title": "Learning to learn: Meta-critic networks for sample efficient learning", "venue": "arXiv preprint arXiv:1706.09529,", "year": 2017 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Jane X Wang", "Zeb Kurth-Nelson", "Dhruva Tirumala", "Hubert Soyer", "Joel Z Leibo", "Remi Munos", "Charles Blundell", "Dharshan Kumaran", "Matt Botvinick" ], "title": "Learning to reinforcement learn", "venue": "arXiv preprint arXiv:1611.05763,", "year": 2016 }, { "authors": [ "Théophane Weber", "Sébastien Racanière", "David P. Reichert", "Lars Buesing", "Arthur Guez", "Danilo Jimenez Rezende", "Adria Puigdomènech Badia", "Oriol Vinyals", "Nicolas Heess", "Yujia Li", "Razvan Pascanu", "Peter Battaglia", "David Silver", "Daan Wierstra" ], "title": "Imagination-Augmented Agents for Deep Reinforcement Learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "M Wiering", "J Schmidhuber" ], "title": "HQ-Learning: Discovering Markovian Subgoals for Non-Markovian Reinforcement Learning", "venue": "Technical Report IDSIA-95-96,", "year": 1996 }, { "authors": [ "Daan Wierstra", "Tom Schaul", "Jan Peters", "Jürgen Schmidhuber" ], "title": "Natural Evolution Strategies", "venue": "IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence),", "year": 2008 }, { "authors": [ "R J Williams" ], "title": "On the Use of Backpropagation in Associative Reinforcement Learning", "venue": "In IEEE International Conference on Neural Networks, San Diego,", "year": 1988 }, { "authors": [ "Ronald J Williams" ], "title": "Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Zhongwen Xu", "Hado Van Hasselt", "David Silver" ], "title": "Meta-gradient reinforcement learning", "venue": "In Advances in Neural Information Processing Systems, volume 2018-Decem, pp. 2396–2407,", "year": 2018 }, { "authors": [ "Jaesik Yoon", "Taesup Kim", "Ousmane Dia", "Sungwoong Kim", "Yoshua Bengio", "Sungjin Ahn" ], "title": "Bayesian Model-Agnostic Meta-Learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tianhe Yu", "Chelsea Finn", "Annie Xie", "Sudeep Dasari", "Tianhao Zhang", "Pieter Abbeel", "Sergey Levine" ], "title": "One-shot imitation from observing humans via domain-adaptive meta-learning", "venue": "International Conference on Learning Representations, Workshop Track,", "year": 2018 }, { "authors": [ "Zeyu Zheng", "Junhyuk Oh", "Satinder Singh" ], "title": "On learning intrinsic rewards for policy gradient methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The process of evolution has equipped humans with incredibly general learning algorithms. They enable us to solve a wide range of problems, even in the absence of a large number of related prior experiences. The algorithms that give rise to these capabilities are the result of distilling the collective experiences of many learners throughout the course of natural evolution. By essentially learning from learning experiences in this way, the resulting knowledge can be compactly encoded in the genetic code of an individual to give rise to the general learning capabilities that we observe today.\nIn contrast, Reinforcement Learning (RL) in artificial agents rarely proceeds in this way. The learning rules that are used to train agents are the result of years of human engineering and design, (e.g. Williams (1992); Wierstra et al. (2008); Mnih et al. (2013); Lillicrap et al. (2016); Schulman et al. (2015a)). Correspondingly, artificial agents are inherently limited by the ability of the designer to incorporate the right inductive biases in order to learn from previous experiences.\nSeveral works have proposed an alternative framework based on meta reinforcement learning (Schmidhuber, 1994; Wang et al., 2016; Duan et al., 2016; Finn et al., 2017; Houthooft et al., 2018; Clune, 2019). Meta-RL distinguishes between learning to act in the environment (the reinforcement learning problem) and learning to learn (the meta-learning problem). Hence, learning itself is now a learning problem, which in principle allows one to leverage prior learning experiences to meta-learn general learning rules that surpass human-engineered alternatives. However, while prior work found that learning rules could be meta-learned that generalize to slightly different environments or goals (Finn et al., 2017; Plappert et al., 2018; Houthooft et al., 2018), generalization to entirely different environments remains an open problem.\nIn this paper we present MetaGenRL1, a novel meta reinforcement learning algorithm that metalearns learning rules that generalize to entirely different environments. MetaGenRL is inspired by the process of natural evolution as it distills the experiences of many agents into the parameters of an objective function that decides how future individuals will learn. Similar to Evolved Policy Gradients (EPG; Houthooft et al. (2018)), it meta-learns low complexity neural objective functions that can be used to train complex agents with many parameters. However, unlike EPG, it is able to meta-learn using second-order gradients, which offers several advantages as we will demonstrate.\nWe evaluate MetaGenRL on a variety of continuous control tasks and compare to RL2 (Wang et al., 2016; Duan et al., 2016) and EPG in addition to several human engineered learning algorithms.\n1Code is available at http://louiskirsch.com/code/metagenrl\nCompared to RL2 we find that MetaGenRL does not overfit and is able to train randomly initialized agents using meta-learned learning rules on entirely different environments. Compared to EPG we find that MetaGenRL is more sample efficient, and outperforms significantly under a fixed budget of environment interactions. The results of an ablation study and additional analysis provide further insight into the benefits of our approach." 
}, { "heading": "2 PRELIMINARIES", "text": "Notation We consider the standard MDP Reinforcement Learning setting defined by a tuple e = (S,A, P, ρ0, r, γ, T ) consisting of states S, actions A, the transition probability distribution P : S × A × S → R+, an initial state distribution ρ0 : S → R+, the reward function r : S × A → [−Rmax, Rmax], a discount factor γ, and the episode length T . The objective for the probabilistic policy πφ : S ×A→ R+ parameterized by φ is to maximize the expected discounted return:\nEτ [ T−1∑ t=0 γtrt], where s0 ∼ ρ0(s0), at ∼ πφ(at|st), st+1 ∼ P (st+1|st, at), rt = r(st, at), (1)\nwith τ = (s0, a0, r0, s1, ..., sT−1, aT−1, rT−1).\nHuman Engineered Gradient Estimators A popular gradient-based approach to maximizing Equation 1 is REINFORCE (Williams, 1992). It directly differentiates Equation 1 with respect to φ using the likelihood ratio trick to derive gradient estimates of the form:\n∇φEτ [LREINF (τ, πφ)] := Eτ [∇φ T−1∑ t=0 log πφ(at|st) · T−1∑ t′=t γt ′−trt′)]. (2)\nAlthough this basic estimator is rarely used in practice, it has become a building block for an entire class of policy-gradient algorithms of this form. For example, a popular extension from Schulman et al. (2015b) combines REINFORCE with a Generalized Advantage Estimate (GAE) to yield the following policy gradient estimator:\n∇φEτ [LGAE(τ, πφ, V )] := Eτ [∇φ T−1∑ t=0 log πφ(at|st) ·A(τ, V, t)]. (3)\nwhere A(τ, V, t) is the GAE and V : S → R is a value function estimate. Several recent other extensions include TRPO (Schulman et al., 2015a), which discourages bad policy updates using trust regions and iterative off-policy updates, or PPO (Schulman et al., 2017), which offers similar benefits using only first order approximations.\nParametrized Objective Functions In this work we note that many of these human engineered policy gradient estimators can be viewed as specific implementations of a general objective function L that is differentiated with respect to the policy parameters:\n∇φEτ [L(τ, πφ, V )]. (4)\nHence, it becomes natural to consider a generic parametrization of L that, for various choices of parameters α, recovers some of these estimators. In this paper, we will consider neural objective functions where Lα is implemented by a neural network. Our goal is then to optimize the parameters α of this neural network in order to give rise to a new learning algorithm that best maximizes Equation 1 on an entire class of (different) environments." }, { "heading": "3 META-LEARNING NEURAL OBJECTIVES", "text": "In this work we propose MetaGenRL, a novel meta reinforcement learning algorithm that metalearns neural objective functions of the form Lα(τ, πφ, V ). MetaGenRL makes use of value functions and second-order gradients, which makes it more sample efficient compared to prior work (Duan et al., 2016; Wang et al., 2016; Houthooft et al., 2018). More so, as we will demonstrate, MetaGenRL meta-learns objective functions that generalize to vastly different environments.\nOur key insight is that a differentiable critic Qθ : S × A → R can be used to measure the effect of locally changing the objective function parameters α based on the quality of the corresponding policy gradients. This enables a population of agents to use and improve a single parameterized objective function Lα through interacting with a set of (potentially different) environments. During evaluation (meta-test time), the meta-learned objective function can then be used to train a randomly initialized RL agent in a new environment." 
}, { "heading": "3.1 FROM DDPG TO GRADIENT-BASED META-LEARNING OF NEURAL OBJECTIVES", "text": "We will formally introduce MetaGenRL as an extension of the DDPG actor-critic framework (Silver et al., 2014; Lillicrap et al., 2016). In DDPG, a parameterized critic of the form Qθ : S × A → R transforms the non-differentiable RL reward maximization problem into a myopic value maximization problem for any st ∈ S. This is done by alternating between optimization of the critic Qθ and the (here deterministic) policy πφ. The critic is trained to minimize the TD-error by following:\n∇θ ∑\n(st,at,rt,st+1)\n(Qθ(st, at)− yt)2, where yt = rt + γ ·Qθ(st+1, πφ(st+1)), (5)\nand the dependence of yt on the parameter vector θ is ignored. The policy πφ is improved to increase the expected return from arbitrary states by following the gradient∇φ ∑ st Qθ(st, πφ(st)). Both gradients can be computed entirely off-policy by sampling trajectories from a replay buffer.\nMetaGenRL builds on this idea of differentiating the critic Qθ with respect to the policy parameters. It incorporates a parameterized objective function Lα that is used to improve the policy (i.e. by following the gradient ∇φLα), which adds one extra level of indirection: The critic Qθ improves Lα, while Lα improves the policy πφ. By first differentiating with respect to the objective function parameters α, and then with respect to the policy parameters φ, the critic can be used to measure the effect of updating πφ using Lα on the estimated return2: ∇αQθ(st, πφ′(st)), where φ′ = φ−∇φLα(τ, x(φ), V ). (6) This constitutes a type of second order gradient∇α∇φ that can be used to meta-train Lα to provide better updates to the policy parameters in the future. In practice we will use batching to optimize Equation 6 over multiple trajectories τ .\nSimilarly to the policy-gradient estimators from Section 2, the objective function Lα(τ, x(φ), V ) receives as inputs an episode trajectory τ = (s0:T−1, a0:T−1, r0:T−1), the value function estimates\n2In case of a probabilistic policy πφ(at|st) the following becomes an expectation under πφ and a reparameterizable form is required (Williams, 1988; Kingma & Welling, 2014; Rezende et al., 2014). Here we focus on learning deterministic target policies.\nAlgorithm 1 MetaGenRL: Meta-Training Require: p(e) a distribution of environments P ⇐ {(e1 ∼ p(e), φ1, θ1, B1 ← ∅), . . .} . Randomly initialize population of agents Randomly initialize objective function Lα while Lα has not converged do\nfor e, φ, θ, B ∈ P do . For each agent i in parallel if extend replay buffer B then\nExtend B using πφ in e Sample trajectories from B Update critic Qθ using TD-error Update policy by following ∇φLα Compute objective function gradient ∆i for agent i according to Equation 6\nSum gradients ∑ i ∆i to update Lα\nV , and an auxiliary input x(φ) (previously πφ) that can be differentiated with respect to the policy parameters. The latter is critical to be able to differentiate with respect to φ and in the simplest case it consists of the action as predicted by the policy. While Equation 6 is used for meta-learning Lα, the objective functionLα itself is used for policy learning by following∇φLα(τ, x(φ), V ). See Figure 1 for an overview. MetaGenRL consists of two phases: During meta-training, we alternate between critic updates, objective function updates, and policy updates to meta-learn an objective function Lα as described in Algorithm 1. 
During meta-testing in Algorithm 2, we take the learned objective function Lα and keep it fixed while training a randomly initialized policy in a new environment to assess its performance." }, { "heading": "3.2 PARAMETRIZING THE OBJECTIVE FUNCTION", "text": "We will implement Lα using an LSTM (Gers et al., 2000; Hochreiter & Schmidhuber, 1997) that iterates over τ in reverse order and depends on the current policy action πφ(st) (see Figure 2). At every time-step Lα receives the reward rt, the taken action at, the action predicted by the current policy πφ(st), the time t, and value function estimates Vt, Vt+1³. At each step the LSTM outputs the objective value lt, all of which are summed to yield a single scalar output value that can be differentiated with respect to φ. In order to accommodate varying action dimensionalities across different environments, both πφ(st) and at are first convolved and then averaged to obtain an action embedding that does not depend on the action dimensionality. Additional details, including suggestions for more expressive alternatives, are available in Appendix B.
By presenting the trajectory in reverse order to the LSTM (and to Lα correspondingly), it is able to assign credit to an action at based on its future impact on the reward, similar to policy gradient estimators. More so, as a general function approximator using these inputs, the LSTM is in principle able to learn different variance and bias reduction techniques, akin to advantage estimates, generalized advantage estimates, or importance weights⁴. Due to these properties, we expect the class of objective functions that is supported to somewhat relate to a REINFORCE (Williams, 1992) estimator that uses generalized advantage estimation (Schulman et al., 2015b).
³The value estimates are derived from the Q-function, i.e. Vt = Qθ(st, πφ(st)), and are treated as a constant input. Hence, the gradient ∇φLα cannot flow backwards through Qθ, which ensures that Lα cannot naively learn to implement a DDPG-like objective function.
⁴We note that in practice it is difficult to assess whether the meta-learned objective function incorporates bias/variance reduction techniques, especially because MetaGenRL is unlikely to recover known techniques.
Algorithm 2 MetaGenRL: Meta-Testing. Require: a test environment e, and an objective function Lα. [The body of Algorithm 2 was not preserved in this extraction.]
[Figure 2: the LSTM objective function. At each step it receives rt, Vt, Vt+1, t together with a convolved embedding of at and πφ(st), and outputs lt.]" }, { "heading": "3.3 GENERALITY AND EFFICIENCY OF METAGENRL", "text": "MetaGenRL offers a general framework for meta-learning objective functions that can represent a wide range of learning algorithms. In particular, it is only required that both πφ and Lα can be differentiated w.r.t. the policy parameters φ.
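The following is a rough PyTorch sketch of the Section 3.2 parametrization (module names and sizes are our assumptions, not the authors' code): the two actions are convolved with kernel size 1 and averaged over action dimensions, concatenated with (rt, Vt, Vt+1, t), and processed by an LSTM in reverse order whose bounded per-step outputs lt are summed.

```python
import torch
import torch.nn as nn

class NeuralObjective(nn.Module):
    """Sketch of L_alpha: a backward LSTM whose summed scalar outputs form the objective."""
    def __init__(self, hidden=32, embed=16, eta=1.0):
        super().__init__()
        self.action_embed = nn.Conv1d(2, embed, kernel_size=1)   # strides over action dims
        self.lstm = nn.LSTM(input_size=embed + 4, hidden_size=hidden)
        self.head = nn.Linear(hidden, 1)
        self.eta = eta                                           # bound on each l_t

    def forward(self, rewards, actions, policy_actions, values, values_next, times):
        # rewards, values, values_next, times: (T, 1); actions, policy_actions: (T, D).
        # Value estimates are treated as constant inputs, so they are detached here.
        pair = torch.stack([actions, policy_actions], dim=1)     # (T, 2, D)
        emb = self.action_embed(pair).mean(dim=2)                # (T, embed), D-independent
        feats = torch.cat([emb, rewards, values.detach(),
                           values_next.detach(), times], dim=-1)
        out, _ = self.lstm(feats.flip(0).unsqueeze(1))           # iterate tau in reverse
        l_t = self.eta * torch.tanh(self.head(out))              # bounded per-step outputs
        return l_t.sum()                                         # scalar value of L_alpha
```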
In the present work, we use this flexibility to leverage population-based meta-optimization, to increase sample efficiency through off-policy second-order gradients, and to improve the generalization capabilities of meta-learned objective functions.
Population-Based A general objective function should be applicable to a wide range of environments and agent parameters. To this end, MetaGenRL is able to leverage the collective experience of multiple agents to perform meta-learning by using a single objective function Lα shared among a population of agents that each act in their own (potentially different) environment. Each agent locally computes Equation 6 over a batch of trajectories, and the resulting gradients are combined to update Lα. Thus, the relevant learning experience of each individual agent is compressed into the objective function that is available to the entire population at any given time.
Sample Efficiency An alternative to learning neural objective functions using a population of agents is evolution, as in EPG (Houthooft et al., 2018). However, we expect meta-learning using second-order gradients as in MetaGenRL to be much more sample efficient. This is due to off-policy training of the objective function Lα and its subsequent off-policy use to improve the policy. Indeed, unlike in evolution, there is no need to train multiple randomly initialized agents in their entirety in order to evaluate the objective function, thus speeding up credit assignment. Rather, at any point in time, any information that is deemed useful for future environment interactions can directly be incorporated into the objective function. Finally, using the formulation in Equation 6 one can measure the effects of improving the policy using Lα for multiple steps by increasing the corresponding number of gradient steps before applying Qθ, which we will explore in Section 5.2.3.
Meta-Generalization The focus of this work is to learn general learning rules that during test-time can be applied to vastly different environments. A strict separation between the policy and the learning rule, the functional form of the latter, and training across many environments all contribute to this. Regarding the former, a clear separation between the policy and the learning rule as in MetaGenRL is expected to be advantageous for two reasons. Firstly, it allows us to specify the number of parameters of the learning rule independently of the policy and critic parameters. For example, our implementation of Lα uses only 15K parameters for the objective function compared to 384K parameters for the policy and critic. Hence, the learning rule has a short description length. A second advantage is that the meta-learner is unable to directly change the policy and must, therefore, learn to make use of the objective function. This makes it difficult for the meta-learner to overfit to the training environments." }, { "heading": "4 RELATED WORK", "text": "Among the earliest pursuits in meta-learning are meta-hierarchies of genetic algorithms (Schmidhuber, 1987) and learning update rules in supervised learning (Bengio et al., 1990). While the former introduced a general framework of entire meta-hierarchies, it relied on discrete non-differentiable programs. The latter introduced local update rules that included free parameters, which could be learned using gradients in a supervised setting.
Schmidhuber (1993) introduced a differentiable self-referential RNN that could address and modify its own weights, albeit difficult to learn.
Hochreiter et al. (2001) introduced differentiable meta-learning using RNNs to scale to larger problem instances. By giving an RNN access to its prediction error, it could implement its own meta-learning algorithm, where the weights are the meta-learned parameters, and the hidden states the subject of learning. This was later extended to the RL setting (Wang et al., 2016; Duan et al., 2016; Santoro et al., 2016; Mishra et al., 2018) (here referred to as RL2). As we show empirically in our paper, meta-learning with RL2 does not generalize well. It lacks a clear separation between policy and objective function, which makes it easy to overfit on training environments. This is exacerbated by the imbalance of O(n²) meta-learned parameters to learn O(n) activations, unlike in MetaGenRL.
Many other recent meta-learning algorithms learn a policy parameter initialization that is later fine-tuned using a fixed reinforcement learning algorithm (Finn et al., 2017; Schulman et al., 2017; Grant et al., 2018; Yoon et al., 2018). Different from MetaGenRL, these approaches use second-order gradients on the same policy parameter vector instead of using a separate objective function. Albeit in principle general (Finn & Levine, 2018), the mixing of policy and learning algorithm leads to a complicated way of expressing general update rules. Similar to RL2, adaptation to related tasks is possible, but generalization is difficult (Houthooft et al., 2018).
Objective functions have been learned prior to MetaGenRL. Houthooft et al. (2018) evolve an objective function that is later used to train an agent. Unlike MetaGenRL, this approach is extremely costly in terms of the number of environment interactions required to evaluate and update the objective function. Most recently, Bechtle et al. (2019) introduced learned loss functions for reinforcement learning that also make use of second-order gradients, but use a policy gradient estimator instead of a Q-function. Similar to other work, their focus is only on narrow task distributions. Learned objective functions have also been used for learning unsupervised representations (Metz et al., 2019), DDPG-like meta-gradients for hyperparameter search (Xu et al., 2018), and learning from human demonstrations (Yu et al., 2018). Concurrent to our work, Alet et al. (2020) use techniques from architecture search to search for viable artificial curiosity objectives that are composed of primitive objective functions.
Li & Malik (2016; 2017) and Andrychowicz et al. (2016) conduct meta-learning by learning optimizers that update parameters φ by modulating the gradient of some fixed objective function L: ∆φ = fα(∇φL), where α is learned. They differ from MetaGenRL in that they only modulate the gradient of a fixed objective function L instead of learning L itself.
Another connection exists to meta-learned intrinsic reward functions (Schmidhuber, 1991a; Dayan & Hinton, 1993; Wiering & Schmidhuber, 1996; Singh et al., 2004; Niekum et al., 2011; Zheng et al., 2018; Jaderberg et al., 2019). Choosing $\nabla_\phi L_\alpha = \tilde{\nabla}_\phi \sum_{t=1}^{T} \bar{r}_t(\tau)$, where $\bar{r}_t$ is a meta-learned reward and $\tilde{\nabla}_\phi$ is a gradient estimator (such as a value-based or policy-gradient-based estimator), reveals that meta-learning objective functions includes meta-learning the gradient estimator $\tilde{\nabla}$ itself, as long as it is expressible by a gradient $\nabla_\phi$ on an objective Lα.
In contrast, for intrinsic reward functions, the gradient estimator ∇̃ is normally fixed. Finally, we note that positive transfer between different tasks (reward functions) as well as environments (e.g. different Atari games) has been shown previously in the context of transfer learning (Kistler et al., 1997; Parisotto et al., 2015; Rusu et al., 2016; 2019; Nichol et al., 2018) and meta-critic learning across tasks (Sung et al., 2017). In contrast to this work, the approaches that have been shown to be successful in this domain rely entirely on human-engineered learning algorithms." }, { "heading": "5 EXPERIMENTS", "text": "We investigate the learning and generalization capabilities of MetaGenRL on several continuous control benchmarks including HalfCheetah (Cheetah) and Hopper from MuJoCo (Todorov et al., 2012), and LunarLanderContinuous (Lunar) from OpenAI gym (Brockman et al., 2016). These environments differ significantly in terms of the properties of the underlying system that is to be controlled, and in terms of the dynamics that have to be learned to complete the environment. Hence, by training meta-RL algorithms on one environment and testing on other environments they provide a reasonable measure of out-of-distribution generalization.
In our experiments, we will mainly compare to EPG and to RL2 to evaluate the efficacy of our approach. We will also compare to several fixed model-free RL algorithms to measure how well the algorithms meta-learned by MetaGenRL compare to these handcrafted alternatives. Unless otherwise mentioned, we will meta-train MetaGenRL using 20 agents that are distributed equally over the indicated training environments⁵. Meta-learning uses clipped double-Q learning, delayed policy & objective updates, and target policy smoothing from TD3 (Fujimoto et al., 2018). We will allow for 600K environment interactions per agent during meta-training and then meta-test the objective function for 1M interactions. Further details are available in Appendix B." }, { "heading": "5.1 COMPARISON TO PRIOR WORK", "text": "Evaluating on previously seen environments We meta-train MetaGenRL on Lunar and compare its ability to train a randomly initialized agent at test-time (i.e. using the learned objective function and keeping it fixed) to DDPG, PPO, and on- and off-policy REINFORCE (both using GAE) across multiple seeds. Figure 3a shows that MetaGenRL markedly outperforms both the REINFORCE baselines and PPO. Compared to DDPG, which finds the optimal policy, MetaGenRL performs only slightly worse on average, although the presence of outliers increases its variance. In particular, we find that some meta-test agents get ‘stuck’ for some time before reaching the optimal policy (see Section A.2 for additional analysis). Indeed, when evaluating only the best meta-learned objective function that was obtained during meta-training (MetaGenRL (best objective func) in Figure 3a) we are able to observe a strong reduction in variance and even better performance.
We also report results (Figure 3a) when meta-training MetaGenRL on both Lunar and Cheetah, and compare to EPG and RL2 that were meta-trained on these same environments⁶. For MetaGenRL we were able to obtain similar performance to meta-training on only Lunar in this case. In contrast, for EPG it can be observed that even one billion environment interactions are insufficient to find a good objective function (in Figure 3a quickly dropping below -300).
Finally, we find that RL2 reaches the optimal policy after 100 million meta-training iterations, and that its performance is unaffected by additional steps during testing on Lunar. We note that RL2 does not separate the policy and the learning rule, and indeed in a similar ‘within distribution’ evaluation, RL2 was found successful (Wang et al., 2016; Duan et al., 2016).
⁵An ablation study in Section A.3 revealed that a large number of agents is indeed required. ⁶In order to ensure a good baseline we allowed for a maximum of 100M environment interactions for RL2 and 1B for EPG, which is more than eight/eighty times the amount used by MetaGenRL. Regarding EPG, this did require us to reduce the total number of seeds to 3 meta-train × 2 meta-test seeds.
Table 1 provides a similar comparison for two other environments. Here we find that in general MetaGenRL is able to outperform the REINFORCE baselines and PPO, and in most cases (except for Cheetah) performs similarly to DDPG⁷. We also find that MetaGenRL consistently outperforms EPG, and often RL2. For an analysis of meta-training on more than two environments we refer to Appendix A.
⁷We emphasize that the neural objective function under consideration is unable to implement DDPG and only uses a constant value estimate (i.e. ∇φV = 0 by using gradient stopping) during meta-testing.
Generalization to vastly different environments We evaluate the same objective functions learned by MetaGenRL and EPG, and the recurrent dynamics learned by RL2, on Hopper, which is significantly different compared to the meta-training environments. Figure 3b shows that the learned objective function by MetaGenRL continues to outperform both PPO and our implementations of REINFORCE, while the best performing configuration is even able to outperform DDPG.
When comparing to related meta-RL approaches, we find that MetaGenRL is significantly better in this case. The performance of EPG remains poor, which was expected given what was observed on previously seen environments. On the other hand, we now find that the RL2 baseline fails completely (resulting in a flat low-reward evaluation), suggesting that the learned learning rule that was previously found to be successful is in fact entirely overfitted to the environments that were seen during meta-training. We were able to observe similar results when using different train and test environment splits, as reported in Table 1 and in Appendix A." }, { "heading": "5.2 ANALYSIS", "text": "" }, { "heading": "5.2.1 META-TRAINING PROGRESSION OF OBJECTIVE FUNCTIONS", "text": "Previously we focused on test-time training randomly initialized agents using an objective function that was meta-trained for a total of 600K steps (corresponding to a total of 12M environment interactions across the entire population). We will now investigate the quality of the objective functions during meta-training.
Figure 4 displays the result of evaluating an objective function on Hopper at different intervals during meta-training on Cheetah and Lunar. Initially (28K steps) it can be seen that due to lack of meta-training there is only a marginal improvement in the return obtained during test time. However, after only meta-training for 86K steps we find (perhaps surprisingly) that the meta-trained objective function is already able to make consistent progress in optimizing a randomly initialized agent during test-time. On the other hand, we observe large variances at test-time during this phase of meta-training.
Throughout the remaining stages of meta-training we then observe an increase in convergence speed, more stable updates, and a lower variance across seeds." }, { "heading": "5.2.2 ABLATION STUDY", "text": "We conduct an ablation study of the neural objective function that was described in Section 3.2. In particular, we assess the dependence of Lα on the value estimates Vt, Vt+1 and on the time component that could to some extent be learned. Other ablations, including limiting access to the action chosen or to the received reward, are expected to be disastrous for generalization to any other environment (or reward function) and are therefore not explored.
Dependence on t We use a parameterized objective function of the form Lα(at, rt, Vt, πφ(st) | t ∈ 0, ..., T − 1) as in Figure 2, except that it does not receive information about the time-step t at each step. Although information about the current time-step is required in order to learn (for example) a generalized advantage estimate (Schulman et al., 2015b), the LSTM could in principle learn such time tracking on its own, and we expect only minor effects on meta-training and during meta-testing. Indeed, in Figure 5b it can be seen that the neural objective function performs well without access to t, although it converges more slowly on Cheetah during meta-training (Figure 5a).
Dependence on V We use a parameterized objective function of the form Lα(at, rt, t, πφ(st) | t ∈ 0, ..., T − 1) as in Figure 2, except that it does not receive any information about the value estimates at time-step t. There exist reinforcement learning algorithms that work without value function estimates (e.g. Williams (1992); Schmidhuber & Zhao (1998)), although in the absence of an alternative baseline these often have a large variance. Similar results are observed for this ablation in Figure 5a, where a possibly large variance appears to affect meta-training. Correspondingly, during test-time (Figure 5b) we do not find any meaningful training progress to take place. In contrast, we find that we can remove the dependence on one of the value function estimates, i.e. remove Vt+1 but keep Vt, which during some runs even increases performance." }, { "heading": "5.2.3 MULTIPLE GRADIENT STEPS", "text": "We analyze the effect of making multiple gradient updates to the policy using Lα before applying the critic to compute second-order gradients with respect to the objective function parameters as in Equation 6. While in previous experiments we have only considered applying a single update, multiple gradient updates might better capture long-term effects of the objective function. At the same time, moving further away from the current policy parameters could reduce the overall quality of the second-order gradients. Indeed, in Figure 6 it can be observed that using 3 gradient steps already slightly increases the variance during test-time training on Hopper and Cheetah after meta-training on LunarLander and Cheetah. Similarly, we find that further increasing the number of gradient steps to 5 harms performance." }, { "heading": "6 CONCLUSION", "text": "We have presented MetaGenRL, a novel off-policy gradient-based meta reinforcement learning algorithm that leverages a population of DDPG-like agents to meta-learn general objective functions. Unlike related methods, the meta-learned objective functions do not only generalize within narrow task distributions but show similar performance on entirely different tasks, while markedly outperforming REINFORCE and PPO.
We have argued that this generality is due to MetaGenRL’s explicit separation of the policy and learning rule, the functional form of the latter, and training across multiple agents and environments. Furthermore, the use of second-order gradients increases MetaGenRL’s sample efficiency by several orders of magnitude compared to EPG (Houthooft et al., 2018).
In future work, we aim to further improve the learning capabilities of the meta-learned objective functions, including better leveraging knowledge from prior experiences. Indeed, in our current implementation, the objective function is unable to observe the environment or the hidden state of the (recurrent) policy. These extensions are especially interesting as they may allow more complicated curiosity-based (Schmidhuber, 1991b; 1990; Houthooft et al., 2016; Pathak et al., 2017) or model-based (Schmidhuber, 1990; Weber et al., 2017; Ha & Schmidhuber, 2018) algorithms to be learned. To this end, it will be important to develop introspection methods that analyze the learned objective function and to scale MetaGenRL to make use of many more environments and agents." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We thank Paulo Rauber, Imanol Schlag, and the anonymous reviewers for their feedback. This work was supported by the ERC Advanced Grant (no: 742870) and computational resources by the Swiss National Supercomputing Centre (CSCS, project: s978). We also thank NVIDIA Corporation for donating a DGX-1 as part of the Pioneers of AI Research Award and IBM for donating a Minsky machine." }, { "heading": "A ADDITIONAL RESULTS", "text": "A.1 ALL TRAINING AND TEST REGIMES
In the main text, we have shown several combinations of meta-training and testing environments. We will now show results for all combinations, including the respective human-engineered baselines.
Hopper On Hopper (Figure 7) we find that MetaGenRL works well, both in terms of generalization to previously seen environments and to unseen environments. The PPO, REINFORCE, RL2, and EPG baselines are outperformed significantly. Regarding RL2 we observe that it is only able to obtain reward when Hopper was included during meta-training, although its performance is generally poor. Regarding EPG, we observe some learning progress during meta-testing on Hopper after meta-training on Cheetah and Hopper (Figure 7a), although it drops back down quickly as test-time training proceeds. In contrast, when meta-testing on Hopper after meta-training on Cheetah and Lunar (Figure 7b) no test-time training progress is observed at all.
Cheetah Similar results are observed in Figure 8 for Cheetah, where MetaGenRL outperforms PPO and REINFORCE significantly. On the other hand, it can be seen that DDPG notably outperforms MetaGenRL on this environment. It will be interesting to further study these differences in the future to improve the expressibility of our approach. Regarding RL2 and EPG, only within-distribution generalization results are available due to Cheetah having larger observation and/or action spaces compared to Hopper and Lunar. We observe that RL2 performs similarly to our earlier findings on Hopper but significantly improves in terms of within-distribution generalization (likely due to greater overfitting, as was consistently observed for other splits).
EPG initially shows more promise on within-distribution generalization (Figure 8a), but ends up performing as before.
Lunar On Lunar (Figure 9) we find that MetaGenRL is only marginally better compared to the REINFORCE and PPO baselines in terms of within-distribution generalization, and worse in terms of out-of-distribution generalization. Analyzing this result reveals that although many of the runs train rather well, some get stuck during the early stages of training and recover only with delay, or not at all. These outliers lead to a seemingly very large variance for MetaGenRL in Figure 9b. We will provide a more detailed analysis of this result in Section A.2. If we focus on the best performing objective function then we observe competitive performance to DDPG (Figure 9a). Nonetheless, we notice that the objective function trained on Hopper generalizes worse to Lunar, despite our earlier result that objective functions trained on Lunar do in fact generalize well to Hopper. MetaGenRL is still able to outperform both RL2 and EPG in terms of out-of-distribution generalization. We do note that EPG is able to meta-learn objective functions that are able to improve to some extent during test time.
Comparing final scores An overview of the final scores that were obtained for MetaGenRL in comparison to the human-engineered baselines is shown in Table 2. It can be seen that MetaGenRL outperforms PPO and off-/on-policy REINFORCE in most configurations, while DDPG with TD3 tricks remains stronger on two of the three environments. Note that DDPG is currently not among the algorithms representable by MetaGenRL.
A.2 STABILITY OF LEARNED OBJECTIVE FUNCTIONS
In the results presented in Figure 9 on Lunar we observed a seemingly large variance for MetaGenRL that was due to outliers. Indeed, when analyzing the individual runs meta-trained on Lunar and tested on Lunar, we found that one of the runs converged to a local optimum early on during training and was unable to recover from this afterwards. On the other hand, we also observed that runs can be ‘stuck’ for a long time and then make very fast learning progress. This suggests that the objective function may sometimes experience difficulties in providing meaningful updates to the policy parameters during the early stages of training.
We have further analyzed this issue by evaluating one of the objective functions at several intervals throughout meta-training in Figure 10. From the meta-training curve (bottom) it can be seen that meta-training on Lunar converges very early. This means that from then on, updates to the objective function will be based on mostly converged policies. As the test-time plots show, these additional updates appear to negatively affect test-time performance. We hypothesize that the objective function essentially ‘forgets’ about the early stages of training a randomly initialized agent, by only incorporating information about well-performing agents. A possible solution to this problem would be to keep older policies in the meta-training agent population or to use early stopping.
Finally, if we exclude four random seeds (of 12), we indeed find a significant reduction in the variance (and an increase in the mean) of the results observed for MetaGenRL (see Figure 11).
A.3 ABLATION OF AGENT POPULATION SIZE AND UNIQUE ENVIRONMENTS
In our experiments we have used a population of 20 agents during meta-training to ensure diversity in the conditions under which the objective function needs to optimize.
The size of this population is a crucial parameter for stable meta-optimization. Indeed, in Figure 12 it can be seen that meta-training becomes increasingly unstable as the number of agents in the population decreases.
Using a similar argument, one would expect to gain from increasing the number of distinct environments (or agents) during meta-training. In order to verify this, we have evaluated two additional settings: meta-training on Cheetah & Lunar & Walker & Ant with 20 and 40 agents respectively. Figure 13 shows the result of meta-testing on Hopper for these experiments (also see the final results reported for 40 agents in Table 2). Unexpectedly, we find that increasing the number of distinct environments does not yield a significant improvement and, in fact, sometimes even decreases performance. One possibility is that this is due to the simple form of the objective function under consideration, which has no access to the environment observations to efficiently distinguish between them. Another possibility is that MetaGenRL’s hyperparameters require additional tuning in order to be compatible with these setups." }, { "heading": "B EXPERIMENT DETAILS", "text": "In the following we describe all experimental details regarding the architectures used, meta-training, hyperparameters, and baselines. The code to reproduce our experiments is available at http://louiskirsch.com/code/metagenrl.
B.1 NEURAL OBJECTIVE FUNCTION ARCHITECTURE
Neural Architecture In this work we use an LSTM to implement the objective function (Figure 2). The LSTM runs backwards in time over the state, action, and reward tuples that were encountered during the trajectory τ under consideration. At each step t the LSTM receives as input the reward rt, value estimates of the current and previous state Vt, Vt+1, the current timestep t, and finally the action that was taken at the current timestep at in addition to the action as determined by the current policy πφ(st). The actions are first processed by one-dimensional convolutional layers striding over the action dimension, followed by a reduction to the mean. This allows for different action sizes between environments. Let $A^{(B)} \in \mathbb{R}^{1 \times D}$ be the action from the replay buffer, $A^{(\pi)} \in \mathbb{R}^{1 \times D}$ be the action predicted by the policy, and $W \in \mathbb{R}^{2 \times N}$ a learnable matrix corresponding to N outgoing units; then the actions are transformed by
$\frac{1}{D} \sum_{i=1}^{D} \big( [A^{(B)}, A^{(\pi)}]^{T} W \big)_i, \qquad (7)$
where [a, b] is a concatenation of a and b along the first axis. This corresponds to a convolution with kernel size 1 and stride 1. Further transformations with non-linearities can be added after applying W, if necessary. We found it helpful (but not strictly necessary) to use ReLU activations for half of the units and square activations for the other half.
At each time-step the LSTM outputs a scalar value lt (bounded between −η and η using a scaled tanh activation); these values are summed to obtain the value of the neural objective function. Differentiating this value with respect to the policy parameters φ then yields gradients that can be used to improve πφ. We only allow gradients to flow backwards through πφ(st) to φ. This implementation is closely related to the functional form of a REINFORCE (Williams, 1992) estimator using generalized advantage estimation (Schulman et al., 2015b).
All feed-forward networks (critic and policy) use ReLU activations and layer normalization (Ba et al., 2016). The LSTM uses tanh activations for cell and hidden state transformations and sigmoid activations for the gates.
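For concreteness, a direct numpy transcription of Equation 7 might look as follows (a sketch with our own variable names); the example also illustrates why the embedding size is independent of the action dimensionality D.

```python
# Numpy transcription of Equation 7: (1/D) * sum_i ([A_B, A_pi]^T W)_i.
# a_buf and a_pi are (1, D) actions; W is the (2, N) learnable matrix. The result
# is an N-dimensional embedding regardless of the action dimensionality D.
import numpy as np

def action_embedding(a_buf, a_pi, W):
    pair = np.concatenate([a_buf, a_pi], axis=0)   # (2, D): stack along first axis
    per_dim = pair.T @ W                           # (D, N): kernel-size-1 convolution
    return per_dim.mean(axis=0)                    # average over the D action dims

# Example: a 6-dim and a 3-dim action space share the same parameters W.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))
e6 = action_embedding(rng.normal(size=(1, 6)), rng.normal(size=(1, 6)), W)
e3 = action_embedding(rng.normal(size=(1, 3)), rng.normal(size=(1, 3)), W)
assert e6.shape == e3.shape == (16,)
```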
The input time t is normalized between 0 at the beginning of the episode and 1 at the final transition. Any other hyper-parameters can be seen in Table 3.
Extensibility The expressibility of the objective function can be further increased through several means. One possibility is to add the entire sequence of state observations o1:T to its inputs, or to introduce a bi-directional LSTM. Secondly, additional information about the policy (such as the hidden state of a recurrent policy) can be provided to L. Although not explored in this work, this would in principle allow one to learn an objective that encourages certain representations to emerge, e.g. a predictive representation of future observations, akin to a world model (Schmidhuber, 1990; Ha & Schmidhuber, 2018; Weber et al., 2017). In turn, these could create pressure to adapt the policy’s actions to explore unknown dynamics in the environment (Schmidhuber, 1991b; 1990; Houthooft et al., 2016; Pathak et al., 2017).
B.2 META-TRAINING
Annealing with DDPG At the beginning of meta-training (learning Lα), the objective function is randomly initialized and thus does not make sensible updates to the policies. This can lead to irreversibly breaking the policies early during training. Our current implementation circumvents this issue by linearly annealing ∇φLα over the first 10k timesteps (∼ 2% of all timesteps) with the DDPG gradient ∇φQθ(st, πφ(st)). Preliminary experiments suggested that an exponential learning rate schedule on the gradient ∇φLα for the first 10k steps can replace the annealing with DDPG. The learning rate anneals exponentially between a learning rate of zero and 1e-3. However, in some rare cases this may still lead to unsuccessful training runs, and thus we have omitted this approach from the present work.
Standard training During training, the critic is updated twice as many times as the policy and objective function, similar to TD3 (Fujimoto et al., 2018). One gradient update with data sampled from the replay buffer is applied for every timestep collected from the environment. The gradient with respect to φ in Equation 6 is combined with φ using a fixed learning rate in the standard way; all other parameter updates use Adam (Kingma & Ba, 2015) with the default parameters. Any other hyper-parameters can be seen in Table 3 and Table 4.
Using additional gradient steps In our experiments (Section 5.2.3) we analyzed the effect of applying multiple gradient updates to the policy using Lα before applying the critic to compute second-order gradients with respect to the objective function parameters. For two updates, this gives
$\nabla_\alpha Q_\theta(s_t, \pi_{\phi^\dagger}(s_t)) \quad \text{with} \quad \phi^\dagger = \phi' - \nabla_{\phi'} L_\alpha(\tau_1, x(\phi'), V) \quad \text{and} \quad \phi' = \phi - \nabla_\phi L_\alpha(\tau_2, x(\phi), V), \qquad (8)$
and can be extended to more than two steps correspondingly. Additionally, we use disjoint mini-batches of data τ1, τ2. When updating the policy using ∇φLα we continue to use only a single gradient step.
B.3 BASELINES
RL2 The implementation of RL2 follows the paper by Duan et al. (2016). However, we were unable to achieve good results with TRPO (Schulman et al., 2015a) on the MuJoCo environments and thus used PPO (Schulman et al., 2017) instead. The PPO hyperparameters and implementation are taken from rllib (Liang et al., 2018). Our implementation uses an LSTM with 64 units and does not reset the state of the LSTM for two episodes in sequence. Resetting after additional episodes were given did not improve training results.
Different action and observation dimensionalities across environments were handled by using an environment wrapper that pads both with zeros appropriately.
EPG We use the official EPG code base (https://github.com/openai/EPG) from the original paper (Houthooft et al., 2018). The hyperparameters are taken from the paper: V = 64 noise vectors, an update frequency of M = 64, and 128 updates for every inner loop, resulting in an inner loop length of 8196 steps. During meta-test training, we run with the same update frequency for a total of 1 million steps.
PPO & On-Policy REINFORCE with GAE We use the tuned implementations from https://spinningup.openai.com/en/latest/spinningup/bench.html which include a GAE (Schulman et al., 2015b) baseline.
Off-Policy REINFORCE with GAE The implementation is equivalent to MetaGenRL except that the objective function is fixed to be the REINFORCE estimator with a GAE (Schulman et al., 2015b) baseline. Thus, experience is sampled from a replay buffer. We have also experimented with an importance-weighted unbiased estimator, but this resulted in poor performance.
DDPG Our implementation is based on https://spinningup.openai.com/en/latest/spinningup/bench.html and uses the same TD3 tricks (Fujimoto et al., 2018) and hyperparameters (where applicable) that MetaGenRL uses.
Table 4: Training hyperparameters. [The parameter–value table body was not preserved in this extraction.]" }
]
2020
null
SP:f48d609519e10cdf6de5dd0341edd5544d96402c
[ "The paper examines the common practice of performing model selection by choosing the model that maximizes validation accuracy. In a setting where there are multiple tasks, the average validation error hides performance on individual tasks, which may be relevant. The paper casts multi-class image classification as a multi-task problem, where identifying each different class is a different task.", "Model validation curve typically aggregates accuracies of all labels. This paper investigates the fine-grained per-label model validation curve. It shows that the optimal epoch varies by label. The paper proposes a visualization method to detect if there is a disparity between the per-label curves and the summarized validation curve. It also proposes two methods to exploit per-label metrics into model evaluation and selection. The experiments use three datasets: CIFAR 100, Tiny ImageNet, PadChest." ]
The validation curve is widely used for model selection and hyper-parameter search, with the curve usually summarized over all the training tasks. However, this summarization tends to lose the intricacies of the per-task curves and it isn’t able to reflect if all the tasks are at their validation optimum even if the summarized curve might be. In this work, we explore this loss of information, how it affects the model at testing, and how to detect it using interval plots. We propose two techniques as a proof-of-concept of the potential gain in the test performance when per-task validation curves are accounted for. Our experiments on three large datasets show up to a 2.5% increase (averaged over multiple trials) in the test accuracy rate when model selection uses the per-task validation maximums instead of the summarized validation maximum. This potential increase is not a result of any modification to the model but rather of the point in training from which the weights were selected. This presents an exciting direction for new training and model selection techniques that rely on more than just averaged metrics.
[]
[ { "authors": [ "Guillaume Alain", "Yoshua Bengio" ], "title": "Understanding intermediate layers using linear classifier probes", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Hadrien Bertrand", "Mohammad Hashir", "Joseph Paul Cohen" ], "title": "Do Lateral Views Help Automated Chest X-ray Predictions? 2019 In Medical Imaging with Deep Learning Abstract", "venue": null, "year": 2019 }, { "authors": [ "Aurelia Bustos", "Antonio Pertusa", "Jose-Maria Salinas", "Maria de la Iglesia-Vayá" ], "title": "PadChest: A large chest x-ray image dataset with multi-label annotated reports", "venue": "arxiv", "year": 2019 }, { "authors": [ "Zhao Chen", "Vijay Badrinarayanan", "Chen-Yu Lee", "Andrew Rabinovich" ], "title": "GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "J Deng", "W Dong", "R Socher", "L.-J. Li", "K Li", "L Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Theodoros Evgeniou", "Massimiliano Pontil" ], "title": "Regularized multi-task learning", "venue": "In International Conference on Knowledge Discovery and Data Mining,", "year": 2004 }, { "authors": [ "Theodoros Evgeniou", "Charles A. Micchelli", "Massimiliano Pontil" ], "title": "Learning Multiple Tasks with Kernel Methods", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "William Finnoff", "Ferdinand Hergert", "Hans Georg Zimmermann" ], "title": "Improving model selection by nonconvergent methods", "venue": "Neural Networks. doi: 10.1016/S0893-6080(05)80122-4", "year": 1993 }, { "authors": [ "Ben Goodrich", "Itamar Arel" ], "title": "Unsupervised neuron selection for mitigating catastrophic forgetting in neural networks", "venue": "IEEE 57th International Midwest Symposium on Circuits and Systems (MWSCAS)", "year": 2014 }, { "authors": [ "Alex Graves" ], "title": "Adaptive Computation Time for Recurrent Neural Networks. 2016 arXiv", "venue": null, "year": 2016 }, { "authors": [ "Michelle Guo", "Albert Haque", "De An Huang", "Serena Yeung", "Li Fei-Fei" ], "title": "Dynamic Task Prioritization for Multitask Learning", "venue": "In European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. Weinberger" ], "title": "Densely Connected Convolutional Networks", "venue": "In Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Ronald Kemker", "Marc McClure", "Angelina Abitino", "Tyler L Hayes", "Christopher Kanan" ], "title": "Measuring Catastrophic Forgetting in Neural Networks", "venue": "In Thirty-second AAAI Conference on Artificial Intelligence. 
", "year": 2018 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska", "Demis Hassabis", "Claudia Clopath", "Dharshan Kumaran", "Raia Hadsell" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "In Proceedings of the National Academy of Sciences,", "year": 2017 }, { "authors": [ "Iasonas Kokkinos" ], "title": "UberNet: Training a Universal Convolutional Neural Network for Low-, Mid-, and High-Level Vision Using Diverse Datasets and Limited Memory", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "Giwoong Lee", "Eunho Yang", "Sung Ju Hwang" ], "title": "Asymmetric multi-task learning based on task relatedness and loss", "venue": "International Conference of Machine Learning,", "year": 2016 }, { "authors": [ "Sang-Woo Lee", "Jin-Hwa Kim", "Jaehyun Jun", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Overcoming Catastrophic Forgetting by Incremental Moment Matching", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem", "venue": "Psychology of Learning and Motivation,", "year": 1989 }, { "authors": [ "Pranav Rajpurkar", "Jeremy Irvin", "Kaylie Zhu", "Brandon Yang", "Hershel Mehta", "Tony Duan", "Daisy Ding", "Aarti Bagul", "Curtis Langlotz", "Katie Shpanskaya", "Matthew P. Lungren", "Andrew Y. Ng" ], "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "venue": "arxiv", "year": 2017 }, { "authors": [ "R Ratcliff" ], "title": "Connectionist models of recognition memory: constraints imposed by learning and forgetting functions", "venue": null, "year": 1990 }, { "authors": [ "Anthony Robins" ], "title": "Catastrophic Forgetting, Rehearsal and Pseudorehearsal", "venue": "Connection science", "year": 1995 }, { "authors": [ "Sebastian Ruder" ], "title": "An Overview of Multi-Task Learning in Deep Neural Networks", "venue": null, "year": 2017 } ]
[ { "heading": null, "text": "The validation curve is widely used for model selection and hyper-parameter search with the curve usually summarized over all the training tasks. However, this summarization tends to lose the intricacies of the per-task curves and it isn’t able to reflect if all the tasks are at their validation optimum even if the summarized curve might be. In this work, we explore this loss of information, how it affects the model at testing and how to detect it using interval plots. We propose two techniques as a proof-of-concept of the potential gain in the test performance when per-task validation curves are accounted for. Our experiments on three large datasets show up to a 2.5% increase (averaged over multiple trials) in the test accuracy rate when model selection uses the per-task validation maximums instead of the summarized validation maximum. This potential increase is not a result of any modification to the model but rather at what point of training the weights were selected from. This presents an exciting direction for new training and model selection techniques that rely on more than just averaged metrics." }, { "heading": "1 INTRODUCTION", "text": "A validation set, separate from the test set, is the de facto standard for training deep learning models through early stopping. This non-convergent approach (Finnoff et al., 1993) identifies the best model in multi-task/label settings based on an expected error across all tasks. Calculating metrics on the validation set can estimate the model’s generalization capability at every stage of training and monitoring the summarized validation curve over time aids the detection of overfitting. It is common to see the use of validation metrics as a way to stop training and/or load the best model for testing, as opposed to training a model to N epochs and then testing. While current works have always cautioned about the representativeness of validation data being used, the curves themselves haven’t been addressed much. In particular, there hasn’t been much attention on the summarized nature of the curves and their ability to represent the generalization of the constituent tasks.\nTasks can vary in difficulty and even have a dependence on each other (Graves, 2016; Alain & Bengio, 2016). An example by Lee et al. (2016) is to suppose some task a is to predict whether a visual instance ‘has wheels’ or not, and task b is to predict if a given visual object ‘is fast’; not only is one easier, but there is also a dependence between them. So there is a possibility that easier tasks reach their best validation metric before the rest and may start overfitting if training were to be continued. This isn’t reflected very clearly with the use of a validation metric that is averaged over all tasks. As a larger number of underfit tasks would skew the average, the overall optimal validation point gets shifted to a later time-step (epoch) when the model could be worse at the easier tasks. Vice versa, the optimal epoch gets shifted earlier due to a larger, easier subset that are overfit when the harder tasks reach their individual optimal epochs. We term this mismatch in the overall and task optimal epochs as a ‘temporal discrepancy’.\nIn this work, we explore and try to mitigate this discrepancy between tasks. We present in this paper that early stopping on only the expected error over tasks leaves us blind to the performance they are sacrificing per task. 
The work is organized in the following manner: in §2, we explore existing work that deals with methods for incorporating task difficulty (which could be causing this discrepancy) into training. The rest of the sections along with our contributions can be summarized as:\n\n1. We present a method to easily visualize and detect the discrepancy through interval plots in §3.\n\n2. We formulate techniques that quantify this discrepancy by also considering the per-task validation metrics in model selection in §4.\n\n3. We explore the presence of the temporal discrepancy on three image datasets and test the aforementioned techniques to assess the change in performance in §5.\n\n4. To the best of our knowledge, there has not been a study like this into the potential of per-task validation metrics to select an ensemble of models." }, { "heading": "2 RELATED WORK", "text": "Training multiple related tasks together creates a shared representation that can generalize better on individual tasks. The rising prominence of multi-task learning can be attributed to Caruana (1997). It has been acknowledged that some tasks are easier to learn than others, and plenty of works have tried to solve this issue through approaches that slow down the training of easier tasks. In other words, tasks are assigned a priority in the learning phase based on their difficulty, determined through some metric. This assignment of priority implicitly tries to solve the temporal discrepancy without formally addressing its presence. Task prioritization can take the form of gradient magnitudes, parameter count, or update frequencies (Guo et al., 2018). We can group existing solutions into task prioritization as a hyperparameter or task prioritization during training (aka self-paced learning). The post-training brute-force and clustering methods we propose fit into neither category; to the best of our knowledge, they have not been explored before. Instead of adjusting training or retraining, these methods operate on a model which has already been trained.\n\nTask prioritization as a hyperparameter is a way to handle per-task overfitting and is almost the subconscious approach for most practitioners. This would include data augmentation and over/undersampling. An example case is in Kokkinos (2017), where they use manually tuned task weights in order to improve performance.\n\nTask prioritization during training covers approaches where tasks dynamically change priority or are regularized in some way. For example, Guo et al. (2018) change task weights during training based on multiple metrics such as error, perceived difficulty, and learnable parameters. The idea is that some tasks need to have a high weight at the start and a low weight later in training. In a similar direction, GradNorm (Chen et al., 2018) aims to balance task weights by normalizing the gradients across tasks.\n\nUsing relationships between tasks during training is another direction. Ruder (2017) discussed negative transfer, where sharing information with unrelated tasks might actually hurt performance. Work by Lee et al. (2016) incorporated a directed graph of relationships between tasks in order to enforce sharing between related tasks when reweighting tasks. Task clustering has been performed outside of neural networks by Evgeniou et al. (2005); Evgeniou & Pontil (2004), where they regularize per-task SVMs so that the parameters between related tasks are similar.\n\nIt would be natural to use some of these methods as a baseline for our work. 
However, we think it would not be an equitable comparison because:\n\n• These baseline methods are applied during training, whereas ours is a post-training analysis.\n\n• The main aspect of our analysis is only the validation metric, whereas these baselines consider a variety of different aspects of training.\n\n• The focus of our work is on how the weights change with time, keeping all else constant, and how these changes affect the validation and test performance. The aforementioned methods modify the gradients w.r.t. several factors during the training, which adds more degrees of freedom and makes a comparison difficult.\n\nRegardless of task difficulty, training multiple tasks jointly with a neural network can lead to catastrophic forgetting: a network can lose information that it had learned for a particular task as it learns another task (McCloskey & Cohen, 1989). Multiple works have explored and tried to mitigate this phenomenon (Ratcliff, 1990; Robins, 1995; Goodrich & Arel, 2014; Kirkpatrick et al., 2017; Kemker et al., 2018; Lee et al., 2017) and it still remains an open area of research. It is highly likely that catastrophic forgetting could be causing any such temporal discrepancy; exploring the relationship between the two is a very interesting research direction and is left for future work." }, { "heading": "3 STUDYING TEMPORAL DISCREPANCY BETWEEN TASKS", "text": "Firstly, we define what a task is to disambiguate from its general usage in the multi-task learning literature. A ‘task’ is predicting a single output unit out of many, regardless of the training paradigm being multi-class, multi-label or other. Tasks can be very fine-grained, such as predicting the class of an image, or much higher-level, such as image classification, segmentation etc. While our work uses the term in the former context, our motivation and findings can be applied in the latter context (which is the broader and more common context in multi-task learning) as well.\n\nIn the next two subsections, we define the term temporal discrepancy and display an example of it on CIFAR100. Then, we introduce a simple method of visualizing it on datasets with a large number of tasks that would make it difficult to analyze the per-task curves together." }, { "heading": "3.1 TEMPORAL DISCREPANCY", "text": "A temporal discrepancy in the validation performance refers to the phenomenon where the model isn’t optimal for all of its tasks simultaneously. This occurs when the difference between the overall optimal epoch determined by the summarized validation metric and the epoch at which a task achieves its best validation metric exceeds some threshold, i.e., |ts − ti| > δ, where ts is the optimal epoch of the summarized validation curve and ti is the optimal epoch for task i.\n\nFigure 1 displays an example of this discrepancy in CIFAR100 (only five curves plotted for clarity). It is most evident for the labels Sea and Lamp, which undergo a drop of 7.5% and 5.7% respectively in their validation accuracy from their peak epoch to ts. Similarly, Snake also starts degrading till ts but strangely starts improving after. Conversely, Rose and Streetcar are underfit at ts as they continue to improve after.\n\nThe most noteworthy observation is that the averaged validation curve (in dotted black) completely plateaus out after the 150th epoch. There is significant variation occurring in the per-label curves but the averaged curve is unable to represent these dynamics in the training. 
Selecting an optimal model off the averaged curve can be quite misleading as it represents the entire [151, 300] interval as optimal despite the labels’ validation accuracies fluctuating significantly in this interval. The test performance of individual labels can wildly differ depending on which epoch is used for loading the weights for testing and/or deployment." }, { "heading": "3.2 INTERVAL PLOTS", "text": "It is easy to examine the per-label curves in Figure 1 as only 5% of the labels have been plotted. But when the number of tasks is high and all of them need to be plotted together to get a clearer global picture, decomposing the summarized validation curve can get very messy. Quasi-optimal validation interval plots, or interval plots for short, are a way of assessing the timing of each task’s optimal validation performance relative to ts. It is a simple visualization method that aids in determining when and/or for how long the tasks are within the acceptable limits of their best validation performance, and also which and/or how many tasks aren’t within these limits near the overall optimal epoch ts.\n\nCreating an interval plot involves finding a ‘quasi-optimal’ region for each task, i.e., a consecutive temporal interval in which a validation metric of the task fluctuates near its maximum within a set tolerance. The task validation curves are first smoothed out to reduce noise and the time-step (epoch) at which the task achieved its optimal validation metric is determined. Then, the number of epochs before and after this task-optimal epoch in which the task metric stays above a threshold is calculated. This duration of epochs is the interval for the task.\n\nGiven a vector of validation metrics Ai for a task i and a tolerance ε, its interval τi is given by:\nτi = [ti − m, . . . , ti − 1, ti, . . . , ti + n] such that ai,j ≥ ai,ti − ε for all j ∈ τi, where ti = argmax Ai and ai,j ∈ Ai.\nFigure 2 plots the decomposed curves and the equivalent intervals for CIFAR100. The overall optimal epoch ts doesn’t fall in the intervals of almost half the labels; these labels aren’t at their potentially best validation performance at the early stopping point. Some intervals are notably small in duration, meaning those labels have a very sharp peak. This could imply that the validation performance is randomly high at that epoch, and it’d be more suitable to shift the quasi-optimal region of these labels to a longer and/or later interval, one that doesn’t necessarily contain ti, as long as the validation accuracy stays within the tolerance in that interval." }, { "heading": "4 QUANTIFYING PERFORMANCE LOST DUE TO TEMPORAL DISCREPANCY", "text": "In this section, we present two simple techniques that consider the per-task validation metrics for selecting the best model for testing. Two aspects common to these techniques are their hand-crafted & engineered nature and their inefficiency in terms of deployment, training time and/or inference time. The aim with these techniques is to assess how much potential gain in performance could be attained if we account for per-task validation metrics in selecting a model, and we’d like to stress that these techniques serve as a proof-of-concept of this gain (if any). They are intended as a baseline and also a stimulus for increasing research into the effect of the subtleties of the validation curves on model performance and selection.\n\nBrute force This involves loading the model with the weights from a given task i’s optimal validation epoch ti and evaluating on only the samples that belong to that task. 
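As a concrete illustration, here is a minimal sketch of how the per-task optimal epochs, the quasi-optimal intervals of §3.2, and this per-task evaluation could be implemented; names such as val_curves, load_checkpoint and evaluate_task are hypothetical, and checkpoints are assumed to have been saved at every epoch:\n\nimport numpy as np\n\n# val_curves[i, t]: validation accuracy of task i at epoch t\ndef task_optimal_epochs(val_curves, smooth=5):\n    kernel = np.ones(smooth) / smooth  # light moving-average smoothing before the argmax\n    smoothed = np.apply_along_axis(lambda a: np.convolve(a, kernel, mode='same'), 1, val_curves)\n    return smoothed.argmax(axis=1)  # t_i for every task i\n\ndef quasi_optimal_interval(curve, t_i, eps):\n    # widest consecutive run around t_i with curve[j] >= curve[t_i] - eps (Sec. 3.2)\n    lo = hi = t_i\n    while lo > 0 and curve[lo - 1] >= curve[t_i] - eps:\n        lo -= 1\n    while hi < len(curve) - 1 and curve[hi + 1] >= curve[t_i] - eps:\n        hi += 1\n    return lo, hi\n\ndef brute_force_eval(val_curves, load_checkpoint, evaluate_task):\n    scores = {}\n    for i, t_i in enumerate(task_optimal_epochs(val_curves)):\n        model = load_checkpoint(epoch=int(t_i))   # validation-optimal weights for task i\n        scores[i] = evaluate_task(model, task=i)  # test only on the samples of task i\n    return scores\n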
We call this model, loaded with the weights from ti, the validation-optimal model for task i. It is essentially using a “separate model” for each task during evaluation and/or deployment, making this the most naive approach. It is also the most inefficient because it would (i) require storing up to N models, where N is the total number of tasks; (ii) require a way to combine predictions from all N models that wouldn’t be misleading during inference; (iii) increase latency significantly due to the overhead caused by loading and reloading the model weights; and (iv) scale up the inference time by a factor of up to N.\n\nClustering Instead of having separate weights for each task, we try to cluster the set of the task-optimal validation epochs into K clusters so that only K models are required as opposed to N. In this approach, the interval plots can also be utilized to cluster tasks that have similar interval positions and/or lengths in addition to the ti’s. Similar to brute force, this technique also involves multiple models loaded with weights from different epochs, but trades off any gain in performance for a lower number of models." }, { "heading": "5 EXPERIMENTS", "text": "We train variations of DenseNets (Huang et al., 2017) on three image datasets: CIFAR100, Tiny ImageNet and PadChest. All models were trained with three random seeds for model initialization and splits of the training set into training and validation sets.\n\nCIFAR100 CIFAR100 (Krizhevsky & Hinton, 2009) is a dataset of natural images containing 100 classes with 500 training images and 100 testing images per class. We trained a “DenseNet-BC (k = 12)” as described in Huang et al. (2017): it has three dense blocks, a total of 100 layers and a growth rate of 12. It was trained in the exact manner as the original work, i.e. 300 epochs with dropout, weight decay of 10−4, SGD with Nesterov momentum of 0.9 and an initial learning rate of 0.1 that is divided by 10 at the 150th and 225th epochs. As we had carved out 20% of the training set as validation and didn’t use data augmentation, we achieved an average test accuracy of 72.14%. In our analysis, we only use the validation curves after the 150th epoch because the training is very noisy and brittle up to that epoch due to the use of a learning rate of 0.1 (Figures 1 and 2).\n\nTiny ImageNet ImageNet (Deng et al., 2009) is a dataset with 1.5 million images and 1000 classes. Tiny ImageNet (https://tiny-imagenet.herokuapp.com/) is a subset of ImageNet with images resized to 64x64 and only 200 classes. We utilized the same architecture as CIFAR100 but with a total of 190 layers and a growth rate of 40. In addition, we used a stride of 2 for the first convolution. This modified DenseNet was also trained in the same manner as above but with the hyperparameters used for ImageNet in the original work: 100 epochs, no dropout and dividing the learning rate at the 30th and 60th epochs instead. The rest of the hyperparameters remained the same. 10% of the training set was used as validation and we achieved an average test accuracy of 63.09%. Similar to CIFAR100, we only use the validation curves after the 30th epoch. The interval plot from one run of training is given in Figure 3a.\n\nPadChest PadChest (Bustos et al., 2019) is a medical imaging dataset of 160,000 chest X-rays of 67,000 patients with multiple visits and views available. We used the publicly available code provided by Bertrand et al. (2019) to recreate their cohort of around 31,000 samples. 
We trained a multilabel DenseNet-121 (Huang et al., 2017; Rajpurkar et al., 2017) on the frontal views and only those labels which have more than 50 samples (total 64 labels). With a 60-20-20 split between training, validation, and test sets, we trained for 100 epochs with Adam and an initial learning rate of 0.0001 that is halved every 20 epochs. The interval plot from one run of training is given in Figure 3b." }, { "heading": "5.1 BRUTE FORCE", "text": "By brute forcing the best model selection, we wanted to assess how much performance is lost when the summarized validation curve is used. This involves evaluating each task with its own set of optimal weights determined from its specific optimal epoch. This would require N models in theory, but the number of validation-optimal models is actually lower as many tasks have inter-dependent learning profiles. These correlated tasks may reach their optimal validation performance at the same epoch, thus requiring only a single common set of weights for all of them. The number of required models also shrinks when the total number of training epochs is small, as that increases the probability of tasks having the same optimal epoch. On analyzing the task validation curves for the three datasets, we do find that the number of models required is much lower than N. CIFAR100 and Tiny ImageNet both required less than 60 models for brute forcing the evaluation, despite the latter having twice as many labels. Also, since Tiny ImageNet was trained for one-third of the epochs, the number of models reduced drastically. The number of models for PadChest was around 35.\n\nThe results of using a single model for each task are tabulated for the three datasets in Table 1. On using each label’s validation-optimal model for evaluating on the test set, the test metric always increases for CIFAR100 and Tiny ImageNet in comparison to using the baseline model selected from the summarized validation curve. The top-1 accuracy for CIFAR100 undergoes an average and maximum increase of approximately 2.5% & 3.2% respectively. For Tiny ImageNet, an average and maximum increase of 1.72% & 1.95% is observed.\n\nHowever, the average increase in PadChest is not only meagre but also within the standard deviation. This could be because the temporal discrepancy isn’t very pronounced in PadChest: the overall optimal epoch does happen to be in the intervals of a very large portion of the labels (Figure 3b). As PadChest has a smaller number of outputs compared to the other two (64 vs. 100/200), there could also be a relationship between the number of outputs and the magnitude of the discrepancy’s effect. These results show that accounting for the per-task validation metrics can mitigate a significant temporal discrepancy and increase the testing performance." }, { "heading": "5.2 COMPUTATION-PERFORMANCE TRADEOFF", "text": "We can further reduce the number of validation-optimal models required by merging the ti’s that are close to each other. We use K-means to cluster the set of task-optimal epochs into K clusters. We vary K from 2 to N to gain an insight into how much performance is lost as we reduce the number of validation-optimal models required down to one, where using only one model is equivalent to using the summarized validation curve. For any given cluster k, we use its center as the epoch for loading the weights to evaluate the set of the tasks in k. 
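A minimal sketch of this clustering step, assuming the per-task optimal epochs ti have already been extracted (scikit-learn’s KMeans is one possible implementation; the names here are illustrative):\n\nimport numpy as np\nfrom sklearn.cluster import KMeans\n\ndef cluster_task_epochs(t_opt, k):\n    # cluster the 1-D set of task-optimal epochs into k groups\n    epochs = np.asarray(t_opt, dtype=float).reshape(-1, 1)\n    km = KMeans(n_clusters=k, n_init=10).fit(epochs)\n    centers = km.cluster_centers_.ravel()  # one shared epoch per cluster\n    assignment = km.labels_                # which of the k models each task uses\n    return centers, assignment\n\n# each task i is then evaluated with the weights saved around centers[assignment[i]]\n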
We round off the center to the nearest integer for our analysis, but the fractional part can be used to select weights after a specific batch iteration in that epoch.\n\nThe results for the three datasets are plotted in Figure 4. There is an interesting observation that the accuracy doesn’t always increase with the number of task-optimal models, which is very noticeable in Tiny ImageNet Seed 2. This could be due to the noisy nature of training with SGD, where the validation metric can oscillate a lot between epochs and even a shift of one epoch can cause a decrease in performance." }, { "heading": "6 CONCLUSION", "text": "In this work, we examine the decomposition of a model’s average validation curve into its per-task curves to assess the presence of a temporal discrepancy on three image datasets. We provide a visualization method to detect if there is a disparity between the task curves and the summarized validation curve. We test two techniques that incorporate the per-task metrics into model evaluation, and we find that when per-task validation metrics are accounted for in training runs that show a significant temporal discrepancy, we gain an increase in testing performance. We show that with CIFAR100 there is room for up to 2.5% to be gained in testing accuracy, and 1.72% for Tiny ImageNet, if we don’t use the averaged metrics as they are.\n\nWith these experiments, we aim to create more awareness of how summarized validation metrics cannot represent a model that is truly optimal for all its tasks all the time. Using averaged curves could mean that models are currently being trained oblivious of the performance being sacrificed on individual tasks. We wish to draw attention to the need for both theoretical and engineered approaches that would take the per-task validation metrics into account while training. Our experiments demonstrate that there is a potential that current state-of-the-art models could possibly be made even more optimal by ensuring all of their prediction tasks are at their validation optima." } ]
2019
null
SP:67c44f33dff59e4d218f753fdbc6296da62cdf62
[ "This paper compares SGD and SVRG (as a representative variance reduced method) to explore tradeoffs. Although the computational complexity vs overall convergence performance tradeoff is well-known at this point, an interesting new perspective is the comparison in regions of interpolation (where SGD gradient variance will diminish on its own) and label noise (which propogates more seriously in SGD vs SVRG). The analysis is done on a simple linear model with regression, with some experiments on simulations, MNIST, and CIFAR.", "This paper examines the tradeoffs between applying SVRG and SGD for training neural networks by providing an analysis of noisy least squares regression problems as well as experiments on simple MLPs and CNNs on MNIST and CIFAR-10. The theory analyzes a linear model where both the input $x$ and label noise $\\epsilon$ follow Gaussian distributions. Under these assumptions, the paper shows that SVRG is able to converge to a smaller neighborhood at a slower rate than SGD, which converges faster to a larger neighborhood. This analysis coincides with the experimental behavior applied to neural networks, where one observes when training underparameterized models that SGD significantly outperforms SVRG initially, but SVRG is able to attain a lower loss value asymptotically. In the overparameterized regime, SGD is demonstrated to always outperform SVRG experimentally, which is argued to coincide with the case where there is no label noise in the theory." ]
Stochastic gradient descent (SGD), which trades off noisy gradient updates for computational efficiency, is the de facto optimization algorithm to solve large-scale machine learning problems. SGD can make rapid learning progress by performing updates using subsampled training data, but the noisy updates also lead to slow asymptotic convergence. Several variance reduction algorithms, such as SVRG, introduce control variates to obtain a lower variance gradient estimate and faster convergence. Despite their appealing asymptotic guarantees, SVRG-like algorithms have not been widely adopted in deep learning. The traditional asymptotic analysis in stochastic optimization provides limited insight into training deep learning models under a fixed number of epochs. In this paper, we present a non-asymptotic analysis of SVRG under a noisy least squares regression problem. Our primary focus is to compare the exact loss of SVRG to that of SGD at each iteration t. We show that the learning dynamics of our regression model closely matches that of neural networks on MNIST and CIFAR-10 for both the underparameterized and the overparameterized models. Our analysis and experimental results suggest there is a trade-off between the computational cost and the convergence speed in underparameterized neural networks. SVRG outperforms SGD after the first few epochs in this regime. However, SGD is shown to always outperform SVRG in the overparameterized regime.
[ { "affiliations": [], "name": "A NON-ASYMPTOTIC" } ]
[ { "authors": [ "Zeyuan Allen-Zhu" ], "title": "Katyusha: The first direct acceleration of stochastic gradient methods", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine learning and the bias-variance trade-off", "venue": null, "year": 2018 }, { "authors": [ "Léon Bottou" ], "title": "Large-scale machine learning with stochastic gradient descent", "venue": "In Proceedings of COMPSTAT’2010,", "year": 2010 }, { "authors": [ "Xi Chen", "Jason D. Lee", "Xin T. Tong", "Yichen Zhang" ], "title": "Statistical inference for model parameters in stochastic gradient descent, 2016", "venue": null, "year": 2016 }, { "authors": [ "Lénaı̈c Chizat", "Francis Bach" ], "title": "A note on lazy training in supervised differentiable programming", "venue": null, "year": 2018 }, { "authors": [ "Aaron Defazio", "Léon Bottou" ], "title": "On the ineffectiveness of variance reduced optimization for deep learning", "venue": null, "year": 2018 }, { "authors": [ "Aaron Defazio", "Francis R. Bach", "Simon Lacoste-Julien" ], "title": "SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Nidham Gazagnadou", "Robert M. Gower", "Joseph Salmon" ], "title": "Optimal mini-batch and step sizes for SAGA", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Rong Ge", "Sham M Kakade", "Rahul Kidambi", "Praneeth Netrapalli" ], "title": "The step decay schedule: A near optimal, geometrically decaying learning rate procedure", "venue": null, "year": 1904 }, { "authors": [ "Trevor Hastie", "Andrea Montanari", "Saharon Rosset", "Ryan J. 
Tibshirani" ], "title": "Surprises in highdimensional ridgeless least squares interpolation", "venue": null, "year": 1903 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Arthur Jacot", "Clément Hongler", "Franck Gabriel" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2013 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Dmitry Kovalev", "Samuel Horváth", "Peter Richtárik" ], "title": "Don’t jump through hoops and remove those loops: Svrg and katyusha are better without the outer loop", "venue": null, "year": 1901 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Yann LeCun", "Leon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Jaehoon Lee", "Lechao Xiao", "Samuel S Schoenholz", "Yasaman Bahri", "Jascha Sohl-Dickstein", "Jeffrey Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": null, "year": 1902 }, { "authors": [ "Tianyang Li", "Liu Liu", "Anastasios Kyrillidis", "Constantine Caramanis" ], "title": "Statistical inference using sgd, 2017", "venue": null, "year": 2017 }, { "authors": [ "Siyuan Ma", "Raef Bassily", "Mikhail Belkin" ], "title": "The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Philipp Moritz", "Robert Nishihara", "Michael Jordan" ], "title": "A linearly-convergent stochastic l-bfgs algorithm", "venue": "In Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "K.B. Petersen", "M.S. Pedersen" ], "title": "The matrix cookbook, nov", "venue": "URL http://localhost/ pubdb/p.php?3274. Version", "year": 2012 }, { "authors": [ "Anant Raj", "Sebastian U. Stich" ], "title": "k-svrg: Variance reduction for large scale optimization, 2018", "venue": null, "year": 2018 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Nicolas Le Roux", "Mark W. Schmidt", "Francis R. 
Bach" ], "title": "A stochastic gradient method with an exponential convergence rate for finite training sets", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2012 }, { "authors": [ "Tom Schaul", "Sixin Zhang", "Yann LeCun" ], "title": "No more pesky learning rates", "venue": "JMLR Workshop and Conference Proceedings,", "year": 2013 }, { "authors": [ "Mark Schmidt", "Nicolas Le Roux" ], "title": "Fast convergence of stochastic gradient descent under a strong growth condition", "venue": "arXiv preprint arXiv:1308.6370,", "year": 2013 }, { "authors": [ "Othmane Sebbouh", "Nidham Gazagnadou", "Samy Jelassi", "Francis Bach", "Robert M Gower" ], "title": "Towards closing the gap between the theory and practice of svrg", "venue": null, "year": 1908 }, { "authors": [ "Samuel L Smith", "Pieter-Jan Kindermans", "Chris Ying", "Quoc V Le" ], "title": "Don’t decay the learning rate, increase the batch size", "venue": "arXiv preprint arXiv:1711.00489,", "year": 2017 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Panos Toulis", "Edoardo M. Airoldi" ], "title": "Asymptotic and finite-sample properties of estimators based on stochastic gradients", "venue": "Ann. Statist., 45(4):1694–1727,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Sharan Vaswani", "Francis Bach", "Mark Schmidt" ], "title": "Fast and faster convergence of SGD for overparameterized models and an accelerated perceptron", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "Chong Wang", "Xi Chen", "Alexander J Smola", "Eric P Xing" ], "title": "Variance reduction for stochastic gradient optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Yuhuai Wu", "Mengye Ren", "Renjie Liao", "Roger B. Grosse" ], "title": "Understanding short-horizon bias in stochastic meta-optimization", "venue": "In ICLR (Poster). OpenReview.net,", "year": 2018 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In ICLR. OpenReview.net,", "year": 2017 }, { "authors": [ "Guodong Zhang", "Lala Li", "Zachary Nado", "James Martens", "Sushant Sachdeva", "George E Dahl", "Christopher J Shallue", "Roger Grosse" ], "title": "Which algorithmic choices matter at which batch sizes? insights from a noisy quadratic model", "venue": "arXiv preprint arXiv:1907.04164,", "year": 2019 }, { "authors": [ "Tong Zhang" ], "title": "Solving large scale linear prediction problems using stochastic gradient descent algorithms", "venue": "In Proceedings of the Twenty-first International Conference on Machine Learning,", "year": 2004 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many large-scale machine learning problems, especially in deep learning, are formulated as minimizing the sum of loss functions on millions of training examples (Krizhevsky et al., 2012; Devlin et al., 2018). Computing exact gradient over the entire training set is intractable for these problems. Instead of using full batch gradients, the variants of stochastic gradient descent (SGD) (Robbins & Monro, 1951; Zhang, 2004; Bottou, 2010; Sutskever et al., 2013; Duchi et al., 2011; Kingma & Ba, 2014) evaluate noisy gradient estimates from small mini-batches of randomly sampled training points at each iteration. The mini-batch size is often independent of the training set size, which allows SGD to immediately adapt the model parameters before going through the entire training set. Despite its simplicity, SGD works very well, even in the non-convex non-smooth deep learning problems (He et al., 2016; Vaswani et al., 2017). However, the optimization performance of the stochastic algorithm near local optima is significantly limited by the mini-batch sampling noise, controlled by the learning rate and the mini-batch size. The sampling variance and the slow convergence of SGD have been studied extensively in the past (Chen et al., 2016; Li et al., 2017; Toulis & Airoldi, 2017). To ensure convergence, machine learning practitioners have to either increase the mini-batch size or decrease the learning rate toward the end of the training (Smith et al., 2017; Ge et al., 2019).\nRecently, several clever variance reduction methods (Roux et al., 2012; Defazio et al., 2014; Wang et al., 2013; Johnson & Zhang, 2013) were proposed to alleviate the noisy gradient problem by using control-variates to achieve unbiased and lower-variance gradient estimators. In particular, the variants of Stochastic Variance Reduced Gradient (SVRG) (Johnson & Zhang, 2013), k-SVRG (Raj & Stich, 2018), L-SVRG (Kovalev et al., 2019) and Free-SVRG (Sebbouh et al., 2019) construct control-variates from previous staled snapshot model parameters. These methods enjoy a superior asymptotic performance in convex optimization compared to the standard SGD. The control-variate techniques are shown to improve the convergence rate of SGD from a sub-linear to a linear convergence rate. These variance reduction methods can also be combined with momentum (Allen-Zhu, 2017) and preconditioning methods (Moritz et al., 2016) to obtain faster convergence. Despite\ntheir strong theoretical guarantees, SVRG-like algorithms have seen limited success in training deep learning models (Defazio & Bottou, 2018). Traditional results from stochastic optimization focus on the asymptotic analysis, but in practice, most of deep neural networks are only trained for hundreds of epochs due to the high computational cost. To address the gap between the asymptotic benefit of SVRG and the practical computational budget of training deep learning models, we provide a non-asymptotic study on the SVRG algorithms under a noisy least squares regression model. Although optimizing least squares regression is a basic problem, it has been shown to characterize the learning dynamics of many realistic deep learning models (Zhang et al., 2019; Lee et al., 2019). Recent works suggest that neural network learning behaves very differently in the underparameterized regime vs the overparameterized regime (Ma et al., 2018; Vaswani et al., 2019), characterized by whether the learnt model can achieve zero expected loss. 
We account for both training regimes in the analysis by assuming a linear target function and noisy labels. In the presence of label noise, the loss is lower bounded by the label variance. In the absence of the noise, the linear predictor can fit each training example perfectly. We summarize the main contributions as follows:\n• We show the exact expected loss of SVRG and SGD along an optimization trajectory as a function of iterations and computational cost.\n• Our non-asymptotic analysis provides an insightful comparison of SGD and SVRG by considering their computational cost and learning rate schedule. We discuss the trade-offs between the total computational cost, i.e. the total number of back-propagations performed, and convergence performance.\n• We consider two different training regimes with and without label noise. Under noisy labels, the analysis suggests SGD only outperforms SVRG under a mild total computational cost. However, SGD always exhibits a faster convergence compared to SVRG when there is no label noise.\n• Numerical experiments validate our theoretical predictions on both MNIST and CIFAR10 using various neural network architectures. In particular, we found the comparison of the convergence speed of SGD to that of SVRG in underparameterized neural networks closely matches with our noisy least squares model prediction. Whereas, the effect of overparameterization is captured by the regression model without label noise." }, { "heading": "1.1 RELATED WORKS", "text": "Stochastic variance reduction methods consider minimizing a finite-sum of a collection of functions using SGD. In case we use SGD to minimize these objective functions, the stochasticity comes from the randomness in sampling a function in each optimization step. Due to the induced noise, SGD can only converge using decaying step sizes with sub-linear convergence rate. Methods such as SAG (Roux et al., 2012), SVRG (Johnson & Zhang, 2013), and SAGA (Defazio et al., 2014), are able to recover linear convergence rate of full-batch gradient descent with the asymptotic cost comparable to SGD. SAG and SAGA achieve this improvement at the substantial cost of storing the most recent gradient of each individual function. In contrast, SVRG spends extra computation at snapshot intervals by evaluating the full-batch gradient. Theoretical results such as Gazagnadou et al. (2019) show that under certain smoothness conditions, we can use larger step sizes with stochastic variance reduction methods than is allowed for SGD and hence achieve even faster convergence. In situations where we know the smoothness constant of functions, there are results on the optimal mini-batch size and the optimal step size given the inner loop size (Sebbouh et al., 2019). Applying variance\nreduction methods in deep learning has been studied recently (Defazio & Bottou, 2018). The authors conjectured the ineffectiveness is caused by various elements commonly used in deep learning such as data augmentation, batch normalization and dropout. Such elements can potentially decrease the smoothness and make the stored gradients become stale quickly. The proposed solution is to either remove these elements or update the gradients more frequently than is practical.\nDynamics of SGD and quadratic models Our main analysis tool is very closely related to recent work studying the dynamics of gradient-based stochastic methods. Wu et al. 
(2018) derived the dynamics of stochastic gradient descent with momentum on a noisy quadratic model (Schaul et al., 2013), showing the problem of short-horizon bias. In (Zhang et al., 2019), the authors showed the same noisy quadratic model captures many of the essential characteristics of realistic neural network training. Their noisy quadratic model successfully predicts the effectiveness of momentum, preconditioning and learning rate choices in training ResNets and Transformers. However, these previous quadratic models assume a constant variance in the gradient that is independent of the current parameters and the loss function. This makes them inadequate for analyzing stochastic variance reduction methods, as SVRG can trivially achieve zero variance under constant gradient noise. Instead, we adopt a noisy least-squares regression formulation by considering both the mini-batch sampling noise and the label noise. There are also recent works that derived the risk of SGD for least-squares regression models using the bias-variance decomposition of the risk (Belkin et al., 2018; Hastie et al., 2019). We use a similar decomposition in our analysis. In contrast to the asymptotic analysis in these works, we compare SGD to SVRG along the optimization trajectory for any finite-time horizon under limited computation cost, not just at the convergence points of those algorithms.\n\nUnderparameterization vs overparameterization. Many of the state-of-the-art deep learning models are overparameterized deep neural networks with more parameters than the number of training examples. Even though these models are able to overfit to the data, when trained using SGD, they generalize well (Zhang et al., 2017). As suggested in recent work, the underparameterized and overparameterized regimes have different behaviours (Ma et al., 2018; Vaswani et al., 2019; Schmidt & Roux, 2013). Given infinite width and a proper weight initialization, the learning dynamics of a neural network can be well-approximated by a linear model via the neural tangent kernel (NTK) (Jacot et al., 2018; Chizat & Bach, 2018). In the NTK regime, neural networks are known to achieve global convergence by memorizing every training example. On the other hand, previous convergence results for SVRG have been obtained in stochastic convex optimization problems that are similar to that of an underparameterized model (Roux et al., 2012; Johnson & Zhang, 2013). Our proposed noisy least-squares regression analysis captures both the underparameterization and overparameterization behavior by considering the presence or the absence of the label noise." }, { "heading": "2 PRELIMINARY", "text": "" }, { "heading": "2.1 NOTATIONS", "text": "We will primarily focus on comparing the minibatch version of two methods, SGD and SVRG (Johnson & Zhang, 2013). Denote $L_i$ as the loss on the $i$-th data point. The SGD update is written as,\n$\theta^{(t+1)} = \theta^{(t)} - \alpha^{(t)} \hat{g}^{(t)}$, (1)\nwhere $\hat{g}^{(t)} = \frac{1}{b}\sum_{i=1}^{b} \nabla_{\theta^{(t)}} L_i$ is the minibatch gradient, $t$ is the training iteration, and $\alpha^{(t)}$ is the learning rate. The SVRG algorithm is an inner-outer loop algorithm proposed to reduce the variance of the gradient caused by the minibatch sampling. In the outer loop, every $T$ steps, we evaluate a large batch gradient $\bar{g} = \frac{1}{N}\sum_{i=1}^{N} \nabla_{\theta^{(mT)}} L_i$, where $N \gg b$ and $m$ is the outer loop index, and we store the parameters $\theta^{(mT)}$. In the inner loop, the update rule of the parameters is given by,
In the inner loop, the update rule of the parameters is given by,\nθ(mT+t+1) = θ(mT+t) − α(t) ( ĝ(mT+t) − g̃(mT+t) + ḡ ) (2)\nwhere ĝ(mT+t) = 1b ∑b i ∇θ(mT+t)Li is the current gradient of the mini-batch and g̃(mT+t) = 1 b ∑b i ∇θ(mT )Li is the old gradient. Note that in our analysis, the reference point is chosen to be the last iterate of previous outer loop θ(mT ), recommended as a practical implementation of the algorithm by the original SVRG paper Johnson & Zhang (2013)." }, { "heading": "2.2 THE NOISY LEAST SQUARES REGRESSION MODEL", "text": "We now define the noisy least squares regression model (Schaul et al., 2013; Wu et al., 2018). In this setting, the input data is d-dimensional, and the output label is generated by a linear teacher model with additive noise, (xi, i) ∼ Px × P ; yi = x>i θ∗ + i, where E[xi] = µ ∈ Rd and Cov(xi) = Σ, E[ i] = 0, Var( i) = σ2y . We assume WLOG θ∗ = 0. We also assume the data covariance matrix Σ is diagonal. This is an assumption adopted in many previous analysis and it is also a practical assumption as we often apply whitening to pre-process the training data. We would like to train a student model θ that minimizes the squared loss over the data distribution:\nmin θ L(θ) := E\n[ 1\n2 (x>i θ − yi)2\n] . (3)\nAt each iteration, the optimizer can query an arbitrary number of data points {xi, yi}i sampled from data distribution. The SGD method uses b data points to form a minibatch gradient:\nĝ(t) = 1\nb b∑ i (xix > i θ (t) − xi i) = XbX>b θ(t) − 1√ b Xb b, (4)\nwhere Xb = 1√b [x1;x2; · · · ;xb] ∈ R d×b, and the noise vector b = [ 1; 2; · · · ; b]> ∈ Rb. SVRG on the other hand, queries for N data points every T steps to form a large batch gradient ḡ = XNX > Nθ (mT ) − 1√ N XN N , where XN and N are defined similarly. At each inner loop step, it further queries for another b data points, to form the update in Eq. 2.\nLastly, note that the expected loss can be written as a function of the second moment of the iterate,\nL(θ(t)) = 1 2 E [( x>i θ (t) − i )2] = 1 2 ( tr(ΣE[θ(t)θ(t) > ]) + σ2y ) .\nHence for the following analysis we mainly focus on deriving the dynamics of the second moment E[θ(t)θ(t)>], denoted as A(θ(t)). When Σ is diagonal, the loss can further be reduced to 1 2 diag(Σ) >diag(E[θ(t)θ(t)>]) + 12σ 2 y . We denote diag(E[θ(t)θ(t) > ]) by m(θ(t))." }, { "heading": "2.3 THE DYNAMICS OF SGD", "text": "Definition 1 (Formula for dynamics). We define the following functions and identities,\nM(θ) = E[θθ>], m(θ) = diag(E[θθ>]), C(M(θ)) = Ex[xx>M(θ)xx>]− ΣM(θ)Σ,\nn = α2σ2ydiag(Σ), R = (I− αΣ)2 + α2\nb (Σ2 + diag(Σ)diag(Σ)>),\nQ = 2α2 b (Σ2 + diag(Σ)diag(Σ)>), P = I− αΣ, F = α 2(N + b) Nb (Σ2 + diag(Σ)diag(Σ)>)\nG = α2( b+ 1\nb Σ2 +\n1 b diag(Σ)diag(Σ)>).\nThe SGD update (Eq. 1) with the mini-batch gradient of of the noisy least squares model (Eq. 4) is,\nθ(t+1) = (I− αXbX>b )θ(t) + α√ b Xb b.\nWe substitute the update rule to derive the following dynamics for the second moment of the iterate:\nM(θ(t+1)) = (I− αΣ)M(θ(t))(I− αΣ)︸ ︷︷ ︸ 1©: gradient descent shrinkage + α2 b C(M(θ(t)))︸ ︷︷ ︸\n2©: input noise\n+ α2σ2y b\nΣ︸ ︷︷ ︸ 3©: label noise\n(5)\nThis dynamics equation can be understood intuitively as follows. The term 1© leads to an exponential shrinkage of the loss due to the gradient descent update. Since we are using a noisy gradient, the second term 2© represents the variance of stochastic gradient caused by the random input Xb. The\nterm 3© comes from the label noise. 
We show in the next theorem that when the second moment of the iterate approaches zero, 2© will also approach zero. However due to the presence of the label noise, the expected loss is lower bounded by 3©. When Σ is diagonal, we further analyze and decompose C(M(θ)) as a function of m(θ) so as to derive the following dynamics and decay rate for SGD. Theorem 2 (SGD Dynamics and Decay Rate). Given the noisy linear regression objective function (Eq. 3), under the assumption that x ∼ N (0,Σ) with Σ diagonal and θ∗ = 0, we can express C(θ) as a function of m(θ):\ndiag ( C ( M(θ) )) = ( Σ2 + diag(Σ)diag(Σ)> ) m(θ) (6)\nThen we derive following dynamics of expected second moment of θ: m(θ(t)) = Rt ( m(θ(0))− (I−R) −1n\nb\n) +\n(I−R)−1n b , (7)\nUnder the update rule of SGD, R is the decay rate of the second moment of parameters between two iterations. And based on Theorem 2 the expected loss can be calculated by 12 diag(Σ) >m(θ(t))+ 12σ 2 y ." }, { "heading": "3 A DILEMMA FOR SVRG", "text": "By querying a large batch of datapoints XN every T steps, and a small minibatch Xb at every step, the SVRG method forms the following update rule:\nθ(mT+t+1) = ( I− αXbX>b ) θ(mT+t) + α ( XbX > b −XNX>N ) θ(mT ) +\nα√ N XN N (8)\nTo derive the dynamics of the second moment of the parameters following the SVRG update, we look at the dynamics of one round of inner loop updates, i.e., from θ(mT ) to θ((m+1)T ): Lemma 3. The dynamics of the second moment of the iterate following SVRG update rule is given by,\nM(θ(mT+t+1)) = (I− αΣ)M(θ(mT+t))(I− αΣ)︸ ︷︷ ︸ 1© gradient descent shrinkage + α2\nb C ( M(θ(mT+t)) )\n︸ ︷︷ ︸ 2© input noise\n+ α2σ2y N\nΣ︸ ︷︷ ︸ 3© label noise\n(9)\n+ α2 N + b Nb C ( M(θ(mT )) )\n︸ ︷︷ ︸ 4©variance due to g̃(mT+t)\n−α 2\nb\n( C ( M(θ(mT ))P t ) + C ( P tM(θ(mT )) )) ︸ ︷︷ ︸\n5© Variance reduction from control variate\nThe dynamics equation above is very illuminating as it explicitly manifests the weakness of SVRG. First notice that terms 1©, 2©, 3© reappear, contributed by the SGD update. The additional terms, 4© and 5©, are due to the control variate. Observe that the variance reduction term 5© decays exponentially throughout the inner loop, with decay rate I − αΣ, i.e. P . We immediately notice that this is the same term that governs the decay rate of the term 1©, and hence resulting in a conflict between the two. Specifically, if we want to reduce the term 1© as fast as possible, we would prefer a small decay rate and a large learning rate, i.e. α → 1λmax(Σ) . But this will also make the boosts provided by the control variate diminish rapidly, leading to a poor variance reduction. The term 4© makes things even worse as it will maintain as a constant throughout the inner loop, contributing to an extra variance on top of the variance from standard SGD. On the other hand, if one chooses a small learning rate for the variance reduction to take effect, this inevitably will make the decay rate for term 1© smaller, resulting in a slower convergence. Nevertheless, a good news for SVRG is that the label noise (term 3©) is scaled by bN , which lets SVRG converge to a lower loss value than SGD – a strict advantage of SVRG compared to SGD.\nTo summarize, the variance reduction from SVRG comes at a price of slower gradient descent shrinkage. In contrast, SVRG is able to converge to a lower loss value. This motivates the question, which algorithm to use given a certain computational cost? 
We hence performed a thorough investigation through numerical simulation as well as experiments on real datasets in Sec. 4.\nAs done for SGD, we decompose $C(M(\theta))$ as a function of $m(\theta)$ and derive the following decay rate for SVRG.\nTheorem 4 (SVRG Dynamics and Decay Rate). Given the noisy linear regression objective function (Eq. 3), under the assumption that $x \sim \mathcal{N}(0, \Sigma)$ with $\Sigma$ diagonal and $\theta^* = 0$, the dynamics for SVRG in $m(\theta)$ is given by:\n$m(\theta^{((m+1)T)}) = \lambda(\alpha, b, T, N, \Sigma)\, m(\theta^{(mT)}) + (I - R^T)(I - R)^{-1}\frac{n}{N}$, (10)\n$\lambda(\alpha, b, T, N, \Sigma) = R^T - \left(\sum_{k=0}^{T-1} R^k Q P^{-k}\right) P^{T-1} + (I - R^T)(I - R)^{-1} F$. (11)" }, { "heading": "3.1 THE DYNAMICS WITHOUT LABEL NOISE", "text": "In the absence of the label noise (i.e., $\sigma_y = 0$), we observe that both SGD and SVRG enjoy linear convergence as a corollary of Theorem 2 and Theorem 4:\nCorollary 5. Without the label noise, the dynamics of the second moment following SGD is given by,\n$m(\theta^{(t)}) = R^t m(\theta^{(0)})$,\nand the dynamics of SVRG is given by,\n$m(\theta^{((m+1)T)}) = \lambda(\alpha, b, T, N, \Sigma)\, m(\theta^{(mT)})$,\nwhere $\lambda$ is defined in Eq. (11).\nNote that similar results have been shown in the past (Ma et al., 2018; Vaswani et al., 2019; Schmidt & Roux, 2013), where a general condition known as the “interpolation regime” is used to show linear convergence of SGD. Specifically, they assume that $\nabla L_i(\theta^*) = 0$ for all $i$, and our setting without label noise clearly belongs to this regime. This setting also has practical implications, as one can treat training overparameterized neural networks as being in the interpolation regime. This motivates the investigation of the convergence rate of SGD and SVRG without label noise, which was also extensively studied in the experiments detailed below." }, { "heading": "4 EXPERIMENTS", "text": "In Sec. 3 we discussed a critical dilemma for SVRG: it faces a choice between effective variance reduction and faster gradient descent shrinkage. At the same time, it enjoys a strict advantage over SGD as it converges to a lower loss. We define the total computational cost as the total number of back-propagations performed. Similarly, the per-iteration computational cost refers to the number of back-propagations performed per iteration. In this section, we study the question: which algorithm converges faster given a certain total computational cost? We study this question for both the underparameterized and the overparameterized regimes.\n\nOur investigation consists of two parts. First, numerical simulations of the theoretical convergence rates (Sec. 4.1). Second, experiments on real datasets (Sec. 4.2). In both parts, we first fix the per-iteration computational cost. For SGD, the per-iteration computational budget is equal to the minibatch size. We picked three batch sizes {64, 128, 256}. Denoting the batch size of SGD as $b$, the equivalent batch size for SVRG is $b' = \frac{1}{2}\left(1 - \frac{N}{Tb}\right)b$. We then perform training with an extensive set of hyperparameters for each method with each per-iteration computational cost. For SGD, the hyperparameter under consideration is the learning rate $\alpha$. For SVRG, besides the learning rate, we also ran over a set of snapshot intervals $T$. After running over all sets of hyperparameters, we gather all training curves of all hyperparameters. We then summarize the performance of each algorithm by plotting the lower bound of all training curves, i.e. each point $(l, t)$ on the curve shows the minimum loss $l$ at time step $t$ over all hyperparameters. 
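Concretely, the per-iteration cost matching and the lower-bound summary can be sketched as follows (the array names are illustrative):\n\nimport numpy as np\n\ndef svrg_equiv_batch(b, N, T):\n    # SVRG performs two minibatch back-props per step plus an amortized N/T for the snapshot,\n    # so matching 2*b' + N/T = b gives the equivalent inner-loop batch size b'\n    return 0.5 * (1.0 - N / (T * b)) * b\n\ndef lower_bound_curve(curves):\n    # curves: array of shape (num_hyperparams, num_steps) of training losses\n    return np.min(curves, axis=0)  # pointwise best loss over all hyperparameters\n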
We compared the two methods under different computational costs.\n\nRemarkably, we found that in many cases the phenomena predicted by our theory match observations in practice. Our experiments suggest there is a trade-off between the computational cost and the convergence speed for underparameterized neural networks: SVRG outperformed SGD after a few epochs in this regime. Interestingly, in the case of overparameterized models, a setting that matches modern-day neural network training, SGD strictly dominated SVRG by showing faster convergence throughout the entire training." }, { "heading": "4.1 SIMULATIONS ON NOISY LEAST SQUARES REGRESSION MODEL", "text": "We first performed numerical simulations of the dynamics derived in Theorem 2 for SGD and Theorem 4 for SVRG. We picked a data distribution with data dimension $d = 100$, where the spectrum of $\Sigma$ is given by an exponential decay schedule from 1 to 0.01. For both methods, we picked 50 learning rates from 1.5 to 0.01 using an exponential decay schedule. For SVRG, we further picked a set of snapshot intervals for each learning rate: {256, 128, 64}. We performed simulations in both the underparameterized and overparameterized settings (namely, with and without label noise), and plotted the lower bound curves over all hyperparameters in Figure 2. The x-axis represents the normalized total computational cost $tb/N$, which is equivalent to the notion of an epoch in the finite dataset setting. The loss in Figure 2 does not contain the Bayes error (i.e., $\frac{1}{2}\sigma_y^2$).\n\nWe have the following observations from our simulations. In the case with label noise, the plot demonstrated an explicit trade-off between computational cost and convergence speed. We observed that a crossing point between SGD and SVRG appears, indicating that SGD achieved a faster convergence speed in the first phase of the training but converged to a higher loss, for all per-iteration compute costs. Hence, one can trade compute cost for convergence speed by choosing SGD over SVRG, and vice versa. Interestingly, we found that the per-iteration computational cost does not seem to affect the time at which the crossing point takes place. For all three costs, the crossing points in the plot are at around the same time: 5.5 epochs. In the case of no label noise, we observed that both methods achieved linear convergence, while SGD achieved a much faster rate than SVRG, showing absolute dominance in this regime." }, { "heading": "4.2 BENCHMARK DATASETS", "text": "In this section, we performed a similar investigation as in the last section on two standard machine learning benchmark datasets: MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky, 2009). We present the results from the underparameterized setting first, followed by the overparameterized setting. We performed experiments with three batch sizes for SGD, {64, 128, 256}, and an equivalent batch size for SVRG. For each batch size, we pick 8 learning rates varying from 0.3 to 0.001 following an exponential schedule. Additionally, we chose three snapshot intervals for every computational budget, searching over the best snapshot interval given the data. Hence, for each per-iteration computational cost {64, 128, 256}, there are 24 groups of experiments for SVRG and 8 groups of experiments for SGD." }, { "heading": "4.2.1 UNDERPARAMETERIZED SETTING", "text": "For MNIST, we trained two underparameterized models: 1. a logistic regression model (784-10), and 2. an underparameterized two-layer MLP (784-10-10) where the hidden layer has 10 neurons. 
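For concreteness, the second model could be written as below (a sketch; the choice of activation is our assumption, as the text does not specify it):\n\nimport torch.nn as nn\n\nunderparam_mlp = nn.Sequential(\n    nn.Flatten(),        # 28x28 MNIST image -> 784-dimensional vector\n    nn.Linear(784, 10),  # hidden layer with 10 units\n    nn.ReLU(),           # assumed nonlinearity\n    nn.Linear(10, 10),   # 10-way output\n)\n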
For CIFAR-10, we chose an underparameterized convolutional neural network model, which has only two 8-channel convolutional layers and one 16-channel convolutional layer, with one additional fully-connected layer. The filter size is 5. The lowest loss achieved over all hyperparameters for these models at each per-iteration computational cost is shown in Figure 3.\n\nFrom these experiments, we observe that on MNIST, the results with the underparameterized models were consistent with the dynamics simulation of the noisy least squares regression model with label noise. First of all, SGD converged faster in the early phase, resulting in a crossing point between SGD and SVRG. This shows a trade-off between computational cost and convergence speed: before the crossing point, SGD converged faster than SVRG; after the crossing point, SVRG attained a lower loss. In addition, in Fig 3a, all the crossing points of the three costs matched at the same epoch (around 5), which was also consistent with our findings on the noisy least squares regression model. On CIFAR-10, SGD achieved slightly faster convergence in the early phase, but was surpassed by SVRG around 17-25 epochs, again showing a trade-off between compute and speed." }, { "heading": "4.2.2 THE OVERPARAMETERIZED SETTING", "text": "Lastly, we compared SGD and SVRG on MNIST and CIFAR-10 using overparameterized models. For MNIST, we used an MLP with two hidden layers, each layer having 1024 neurons. For CIFAR-10, we chose a large convolutional network, which has one 64-channel convolutional layer and one 128-channel convolutional layer, followed by one 3200-to-1000 fully connected layer and one 1000-to-10 fully connected layer.\n\nThe lowest loss achieved over all hyperparameters for these models at each per-iteration computational cost is shown in Figure 4. For training on MNIST, both SGD and SVRG attained close to zero training loss. The results were again consistent with our dynamics analysis on the noisy linear regression model without label noise: SGD has a strict advantage over SVRG and achieved a much faster convergence rate than SVRG throughout the entire training. As for CIFAR-10, we stopped the training before either of the two got close to zero training loss due to a lack of computing time, but we clearly see a trend of approaching zero loss. Similarly, we also had the same observations as before, with SGD outperforming SVRG, confirming the limitation of variance reduction in the overparameterized regime." }, { "heading": "5 DISCUSSION", "text": "In this paper, we studied the convergence properties of SGD and SVRG in the underparameterized and overparameterized settings. We provided a non-asymptotic analysis of both algorithms. We then investigated the question of which algorithm to use under a certain total computational cost. We performed numerical simulations of the dynamics equations for both methods, as well as extensive experiments on the standard machine learning datasets MNIST and CIFAR-10. Remarkably, we found that in many cases the phenomena predicted by our theory matched observations in practice. Our experiments suggest there is a trade-off between the computational cost and the convergence speed for underparameterized neural networks: SVRG outperformed SGD after the first few epochs in this regime. In the case of overparameterized models, a setting that matches modern-day neural network training, SGD strictly dominated SVRG by showing faster convergence for all computational costs."
}, { "heading": "A APPENDIX", "text": "" }, { "heading": "B LEMMA ABOUT GRADIENT COVARIANCE", "text": "Lemma 6 (Gradient Covariance). Given the noisy linear regression objective function (Eq. 3), under the assumption that x ∼ N (0,Σ) with Σ diagonal and θ∗ = 0, we have\ndiag(E[xx>θ(t)θ(t) > xx>]) = ( 2Σ2 + diag(Σ)diag(Σ)> ) E[θ(t) ◦2 ]\nE[XbX>b θ(t)θ(t) > XbX > b ]− ΣE[θ(t)θ(t) > ]Σ = 1\nb\n( E[xx>θ(t)θ(t) > xx>]− ΣE[θ(t)θ(t) > ]Σ )\nProof. In the following proof, we define the entry-wise p power on vector x as x◦p. Under our assumption µ = 0, θ∗ = 0 and Σ diagonal, for x ∼ N (0,Σ),x ∈ Rd, we have\nEx,θ(t) [xx>θ(t)θ(t) > xx>] =2Σ2E[θ(t)θ(t) > ] + Tr(ΣE[θ(t)θ(t) > ])Σ. (12)\nEq. 12 is a conclusion from The Matrix Cookbook (See section 8.2.3 in Petersen & Pedersen (2012)).\nThen, for its main diagonal term, we have:\ndiag(Ex,θ(t) [xx>θ(t)θ(t) > xx>]) =2Σ2E[θ(t) ◦2 ] + diag(Σ)diag(Σ)>E[θ(t) ◦2 ] (13) = ( 2Σ2 + diag(Σ)diag(Σ)> ) m(θ(t)) (14)\nHence, for C ( M(θ(t)) ) , we have:\ndiag ( C ( M(θ(t)) )) = ( Σ2 + diag(Σ)diag(Σ)> ) m(θ(t)) (15)\nwhich is the first conclusion of Theorem 2.\nNotice, this conclusion can be generalized to any square matrix A not only for E[θ(t)θ(t)>], i.e. for any square matrix A ∈ Rd×d, with x ∼ N (0,Σ) and Σ diagonal, since\nEx[xx>Axx>] =2Σ2A+ Tr(ΣA)Σ. (16)\nwe have\ndiag(Ex[xx>Axx>]) =2Σ2diag(A) + diag(Σ)diag(Σ)>diag(A) (17) = ( 2Σ2 + diag(Σ)diag(Σ)> ) diag(A) (18)\nFor batch gradient XbX>b θ (t), we have\nE[XbX>b θ(t)θ(t) > XbX > b ] = 1 b2 E [ ( ∑ i∈[N ]b xix > i )θ (t)θ(t) > ( ∑ i∈[N ]b xix > i ) ]\n(19)\n= 1\nb2 bE[xx>θ(t)θ(t)xx>] +\n1 b2 (b2 − b)E[xx>]E[θ(t)θ(t)]E[xx>]\n(20)\n= 1\nb E[xx>θ(t)θ(t)\n> xx>] + b− 1 b ΣE[θ(t)θ(t) > ]Σ (21)\nwhere [N ]b is the index set of Xb." }, { "heading": "C THE PROOF OF THEOREM 2", "text": "Theorem 2. Given the noisy linear regression objective function (Eq. 3), under the assumption that x ∼ N (0,Σ) with Σ diagonal and θ∗ = 0, we can express C(M(θ)) as a function of m(θ):\ndiag ( C ( M(θ) )) = ( Σ2 + diag(Σ)diag(Σ)> ) m(θ)\nThen we derive following dynamics of expected second moment of θ:\nm(θ(t)) = Rt ( m(θ(0))− (I−R) −1n\nb\n) +\n(I−R)−1n b ,\nProof.\nθ(t+1)θ(t+1) > =(I− αXbX>b )θ(t)θ(t) > (I− αXbX>b ) + α√ b (I− αXbX>b )θ(t) >b X>b (22)\n+ α√ b Xb bθ (t)>(I− αXbX>b ) + α2 b Xb b > b X > b (23)\nSince, E[ b] = 0, and b is independent with θ(t), Xb, we have:\nE[Xb bθ(t) > (I− αXbX>b )] = 0 (24)\nE[(I− αXbX>b )θ(t) >b X>b ] = 0 (25) and,\nE b,Xb [ α2\nb Xb b\n> b X > b\n] = α2\nb EXb\n[ Xb ( E[ b >b ] ) Xb ] (26)\n= α2\nb EXb\n[ Xb ( σ2yI ) Xb ] (27)\n= α2σ2y b Σ. (28)\nSince Xb is independent with θ(t), we have: E [ (I− αXbX>b )θ(t)θ(t) > (I− αXbX>b ) ] = (I− αΣ)E[θ(t)θ(t) > ](I− αΣ) (29)\n+ α2 ( E[XbXb>θ(t)θ(t) > XbXb >]− ΣE[θ(t)θ(t) > ]Σ ) . (30)\nThus,\nE[θ(t+1)θ(t+1) > ] =(I− αΣ)E[θ(t)θ(t) > ](I− αΣ) + α 2\nb (E[xx>θ(t)θ(t)\n> xx>] (31)\n− ΣE[θ(t)θ(t) > ]Σ) + α2σ2y b Σ (32)\n=(I− αΣ)E[θ(t)θ(t) > ](I− αΣ) + α 2\nb C(M(θ(t))) + α2σ2y b Σ (33)\nFor its diagonal term, we have:\nm(θ(t+1)) =diag(E[θ(t+1)θ(t+1) > ]) (34)\n= ( (I− αΣ)2 + α 2\nb (Σ2 + diag(Σ)diag(Σ)>)\n) m(θ(t)) +\nα2σ2y b diag(Σ) (35)\n=R ·m(θ(t)) + 1 b n (36)\nThis formula can be written as : m(θ(t+1)) + b−1(R− I)−1n = R ( m(θ(t)) + b−1(R− I)−1n ) (37)\nm(θ(t+1)) = Rt+1 ( m(θ0) + b −1(R− I)−1n ) − b−1(R− I)−1n, (38)\nwhere R = (I− αΣ)2 + α2b−1(Σ2 + diag(Σ)diag(Σ)>), n = α2σ2ydiag(Σ). (39)" }, { "heading": "D THE PROOF OF LEMMA 3", "text": "Lemma 3. 
" }, { "heading": "D THE PROOF OF LEMMA 3", "text": "Lemma 3. The dynamics of the second moment of the iterate following the SVRG update rule are given by

$$M(\theta^{(mT+t+1)}) = \underbrace{(\mathbf{I} - \alpha\Sigma)\,M(\theta^{(mT+t)})\,(\mathbf{I} - \alpha\Sigma)}_{\text{(1) gradient descent shrinkage}} + \underbrace{\frac{\alpha^2}{b}\,C\big(M(\theta^{(mT+t)})\big)}_{\text{(2) input noise}} + \underbrace{\frac{\alpha^2\sigma_y^2}{N}\,\Sigma}_{\text{(3) label noise}} + \underbrace{\alpha^2\,\frac{N+b}{Nb}\,C\big(M(\theta^{(mT)})\big)}_{\text{(4) variance due to } \tilde{g}^{(mT+t)}} - \underbrace{\frac{\alpha^2}{b}\Big(C\big(M(\theta^{(mT)})P^t\big) + C\big(P^t M(\theta^{(mT)})\big)\Big)}_{\text{(5) variance reduction from control variate}}.$$

Proof. The SVRG update rule (Eq. 8) gives

$$\theta^{(mT+t+1)} = \big(\mathbf{I} - \alpha X_b X_b^\top\big)\,\theta^{(mT+t)} + \alpha\big(X_b X_b^\top - X_N X_N^\top\big)\,\theta^{(mT)} + \frac{\alpha}{\sqrt{N}}\,X_N\epsilon_N. \quad (40)$$

Expanding the outer product of the parameters,

$$\theta^{(mT+t+1)}\theta^{(mT+t+1)\top} = (\mathbf{I} - \alpha X_b X_b^\top)\,\theta^{(mT+t)}\theta^{(mT+t)\top}\,(\mathbf{I} - \alpha X_b X_b^\top) + \alpha\,(\mathbf{I} - \alpha X_b X_b^\top)\,\theta^{(mT+t)}\theta^{(mT)\top}\,(X_b X_b^\top - X_N X_N^\top) + \alpha\,(X_b X_b^\top - X_N X_N^\top)\,\theta^{(mT)}\theta^{(mT+t)\top}\,(\mathbf{I} - \alpha X_b X_b^\top) + \alpha^2\,(X_b X_b^\top - X_N X_N^\top)\,\theta^{(mT)}\theta^{(mT)\top}\,(X_b X_b^\top - X_N X_N^\top) + (\text{cross terms involving } \epsilon_N) + \frac{\alpha^2}{N}\,X_N\epsilon_N\epsilon_N^\top X_N^\top. \quad (41)$$

Since $\mathbb{E}[\epsilon_N] = 0$ and $\epsilon_N$ is independent of $X_b$, $X_N$ and $\theta^{(t)}$, all cross terms involving $\epsilon_N$ have zero expectation, and, as in the SGD case,

$$\mathbb{E}_{\epsilon_N, X_N}\big[X_N\epsilon_N\epsilon_N^\top X_N^\top\big] = \mathbb{E}_{X_N}\big[X_N(\sigma_y^2\mathbf{I})X_N^\top\big] = \sigma_y^2\,\Sigma.$$

We next derive a key formula for the expectation of $\theta^{(mT+t)}\theta^{(mT)\top}$, which is used to compute the variance-reduction term:

$$\theta^{(mT+t+1)}\theta^{(mT)\top} = (\mathbf{I} - \alpha X_b X_b^\top)\,\theta^{(mT+t)}\theta^{(mT)\top} + \alpha\,(X_b X_b^\top - X_N X_N^\top)\,\theta^{(mT)}\theta^{(mT)\top} + \frac{\alpha}{\sqrt{N}}\,X_N\epsilon_N\theta^{(mT)\top}.$$

Since $\mathbb{E}[X_N X_N^\top] = \mathbb{E}[X_b X_b^\top] = \Sigma$, and $\epsilon_N$ is independent of $X_N$ and $\theta^{(mT)}$, the expectations of the last two terms vanish. Therefore,

$$\mathbb{E}\big[\theta^{(mT+t+1)}\theta^{(mT)\top}\big] = (\mathbf{I} - \alpha\Sigma)\,\mathbb{E}\big[\theta^{(mT+t)}\theta^{(mT)\top}\big] = (\mathbf{I} - \alpha\Sigma)^{t+1}\,\mathbb{E}\big[\theta^{(mT)}\theta^{(mT)\top}\big] = P^{t+1} M(\theta^{(mT)}), \quad (54)$$

which shows that the covariance between $\hat{g}^{(mT+t)}$ and $\tilde{g}^{(mT+t)}$ decays exponentially.

For every other term appearing in Eq. 41, we have the following conclusions. First, as in the SGD case, the gradient descent shrinkage term satisfies

$$\mathbb{E}\big[(\mathbf{I} - \alpha X_b X_b^\top)\,\theta^{(mT+t)}\theta^{(mT+t)\top}\,(\mathbf{I} - \alpha X_b X_b^\top)\big] = (\mathbf{I} - \alpha\Sigma)\,M(\theta^{(mT+t)})\,(\mathbf{I} - \alpha\Sigma) + \alpha^2 b^{-1}\Big(\mathbb{E}_x[xx^\top M(\theta^{(mT+t)})\, xx^\top] - \Sigma M(\theta^{(mT+t)}) \Sigma\Big).$$

Using Eq. 54, we obtain the variance-reduction terms coming from the control variate. We first take the expectation over $\theta^{(mT+t)}\theta^{(mT)\top}$ using Eq. 54, thanks to the independence among $X_b$, $X_N$ and $\theta$:

$$\mathbb{E}\big[\alpha\,(\mathbf{I} - \alpha X_b X_b^\top)\,\theta^{(mT+t)}\theta^{(mT)\top}\,(X_b X_b^\top - X_N X_N^\top)\big] = \mathbb{E}_{X_b, X_N}\big[\alpha\,(\mathbf{I} - \alpha X_b X_b^\top)\,P^t M(\theta^{(mT)})\,(X_b X_b^\top - X_N X_N^\top)\big] = -\alpha^2 b^{-1}\Big(\mathbb{E}_x[xx^\top P^t M(\theta^{(mT)})\, xx^\top] - \Sigma P^t M(\theta^{(mT)})\, \Sigma\Big),$$

$$\mathbb{E}\big[\alpha\,(X_b X_b^\top - X_N X_N^\top)\,\theta^{(mT)}\theta^{(mT+t)\top}\,(\mathbf{I} - \alpha X_b X_b^\top)\big] = \mathbb{E}_{X_b, X_N}\big[\alpha\,(X_b X_b^\top - X_N X_N^\top)\,M(\theta^{(mT)}) P^t\,(\mathbf{I} - \alpha X_b X_b^\top)\big] = -\alpha^2 b^{-1}\Big(\mathbb{E}_x[xx^\top M(\theta^{(mT)}) P^t\, xx^\top] - \Sigma M(\theta^{(mT)}) P^t \Sigma\Big).$$

For the fourth term, which represents the variance of $\tilde{g}^{(mT+t)}$, using the independence between $X_b$ and $X_N$ we get

$$\mathbb{E}\big[\alpha^2\,(X_b X_b^\top - X_N X_N^\top)\,\theta^{(mT)}\theta^{(mT)\top}\,(X_b X_b^\top - X_N X_N^\top)\big] = \alpha^2\,\frac{N+b}{Nb}\Big(\mathbb{E}_x[xx^\top M(\theta^{(mT)})\, xx^\top] - \Sigma M(\theta^{(mT)}) \Sigma\Big).$$

Collecting all terms and writing $C(A) = \mathbb{E}_x[xx^\top A\, xx^\top] - \Sigma A \Sigma$, we obtain the claimed decomposition:

$$M(\theta^{(mT+t+1)}) = \underbrace{(\mathbf{I}-\alpha\Sigma)M(\theta^{(mT+t)})(\mathbf{I}-\alpha\Sigma)}_{\text{(1)}} + \underbrace{\tfrac{\alpha^2}{b}C\big(M(\theta^{(mT+t)})\big)}_{\text{(2)}} + \underbrace{\tfrac{\alpha^2\sigma_y^2}{N}\Sigma}_{\text{(3)}} + \underbrace{\alpha^2\tfrac{N+b}{Nb}C\big(M(\theta^{(mT)})\big)}_{\text{(4)}} - \underbrace{\tfrac{\alpha^2}{b}\Big(C\big(M(\theta^{(mT)})P^t\big) + C\big(P^t M(\theta^{(mT)})\big)\Big)}_{\text{(5)}}.$$
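For concreteness, this decomposition can be transcribed directly into code. The sketch below (NumPy; all function names are ours) evaluates $C(\cdot)$ via the general Gaussian fourth-moment identity $\mathbb{E}[xx^\top A\, xx^\top] = \Sigma(A + A^\top)\Sigma + \mathrm{Tr}(\Sigma A)\Sigma$, which reduces to the symmetric-case expression above when $A = A^\top$:

import numpy as np

def C(A, Sigma):
    # C(A) = E[x x^T A x x^T] - Sigma A Sigma for x ~ N(0, Sigma)
    return Sigma @ A.T @ Sigma + np.trace(Sigma @ A) * Sigma

def svrg_moment_step(M_t, M_snap, t, Sigma, alpha, b, N, sigma_y2):
    """One step of the Lemma 3 recursion.
    M_t: M(theta^{mT+t}); M_snap: M(theta^{mT}); t: inner-loop index."""
    d = len(Sigma)
    I = np.eye(d)
    Pt = np.linalg.matrix_power(I - alpha * Sigma, t)                # P^t, P = I - alpha * Sigma
    shrink    = (I - alpha * Sigma) @ M_t @ (I - alpha * Sigma)      # term (1)
    in_noise  = (alpha**2 / b) * C(M_t, Sigma)                       # term (2)
    label     = (alpha**2 * sigma_y2 / N) * Sigma                    # term (3)
    var_tilde = (alpha**2 * (N + b) / (N * b)) * C(M_snap, Sigma)    # term (4)
    var_red   = -(alpha**2 / b) * (C(M_snap @ Pt, Sigma)
                                   + C(Pt @ M_snap, Sigma))          # term (5)
    return shrink + in_noise + label + var_tilde + var_red

Iterating this map for t = 0, ..., T-1 and taking np.diag of the iterates recovers the diagonal recursion in m(theta) used in the proof of Theorem 4.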
" }, { "heading": "E THE PROOF OF THEOREM 4", "text": "Theorem 4. Given the noisy linear regression objective function (Eq. 3), under the assumption that $x \sim \mathcal{N}(0,\Sigma)$ with $\Sigma$ diagonal and $\theta^* = 0$, the dynamics of SVRG in $m(\theta)$ are given by

$$m(\theta^{((m+1)T)}) = \lambda(\alpha, b, T, N, \Sigma)\,m(\theta^{(mT)}) + (\mathbf{I} - R^T)(\mathbf{I} - R)^{-1}\,\frac{n}{N},$$

$$\lambda(\alpha, b, T, N, \Sigma) = R^T - \Big(\sum_{k=0}^{T-1} R^k Q P^{-k}\Big) P^{T-1} + (\mathbf{I} - R^T)(\mathbf{I} - R)^{-1} F.$$

Proof. From Lemma 3 and Lemma 6, taking diagonals we get

$$m(\theta^{(mT+t+1)}) = R\,m(\theta^{(mT+t)}) - Q P^t\,m(\theta^{(mT)}) + F\,m(\theta^{(mT)}) + N^{-1} n, \quad (76)$$

where

$$R = (\mathbf{I} - \alpha\Sigma)^2 + \frac{\alpha^2}{b}\big(\Sigma^2 + \mathrm{diag}(\Sigma)\mathrm{diag}(\Sigma)^\top\big), \qquad Q = \frac{2\alpha^2}{b}\big(\Sigma^2 + \mathrm{diag}(\Sigma)\mathrm{diag}(\Sigma)^\top\big),$$

$$F = \frac{\alpha^2(N+b)}{Nb}\big(\Sigma^2 + \mathrm{diag}(\Sigma)\mathrm{diag}(\Sigma)^\top\big), \qquad P = \mathbf{I} - \alpha\Sigma, \qquad n = \alpha^2\sigma_y^2\,\mathrm{diag}(\Sigma).$$

Recursively expanding this formula from $m(\theta^{((m+1)T)})$ down to $m(\theta^{(mT)})$, we get

$$m(\theta^{((m+1)T)}) = R\Big(R\,m(\theta^{(mT+T-2)}) - Q P^{T-2}\,m(\theta^{(mT)}) + F\,m(\theta^{(mT)}) + N^{-1}n\Big) - Q P^{T-1}\,m(\theta^{(mT)}) + F\,m(\theta^{(mT)}) + N^{-1}n$$

$$= R^2\,m(\theta^{(mT+T-2)}) - \Big(\sum_{k=0}^{1} R^k Q P^{-k}\Big) P^{T-1}\,m(\theta^{(mT)}) + \Big(\sum_{k=0}^{1} R^k\Big)\big(F\,m(\theta^{(mT)}) + N^{-1}n\big)$$

$$= \cdots = R^T\,m(\theta^{(mT)}) - \Big(\sum_{k=0}^{T-1} R^k Q P^{-k}\Big) P^{T-1}\,m(\theta^{(mT)}) + (\mathbf{I} - R^T)(\mathbf{I} - R)^{-1}\big(F\,m(\theta^{(mT)}) + N^{-1}n\big). \quad (79)$$

In other words, Eq. 79 describes the dynamics of the expected second moment of the iterate between two adjacent snapshots:

$$m(\theta^{((m+1)T)}) = \lambda(\alpha, b, T, N, \Sigma)\,m(\theta^{(mT)}) + (\mathbf{I} - R^T)(\mathbf{I} - R)^{-1}\,\frac{n}{N},$$

where

$$\lambda(\alpha, b, T, N, \Sigma) = R^T - \Big(\sum_{k=0}^{T-1} R^k Q P^{-k}\Big) P^{T-1} + (\mathbf{I} - R^T)(\mathbf{I} - R)^{-1} F.$$

Since $\mathbf{I} - R$ and $\mathbf{I} - R^T$ commute, $(\mathbf{I} - R)^{-1}(\mathbf{I} - R^T) = (\mathbf{I} - R^T)(\mathbf{I} - R)^{-1}$." }, { "heading": "F THE PROOF OF COROLLARY 5", "text": "Corollary 5. Without label noise, the dynamics of the second moment following SGD are given by

$$m(\theta^{(t)}) = R^t\,m(\theta^{(0)}),$$

and the dynamics of SVRG are given by

$$m(\theta^{((m+1)T)}) = \lambda(\alpha, b, T, N, \Sigma)\,m(\theta^{(mT)}),$$

where $\lambda$ is defined in Eq. (11).

Proof. Without label noise, i.e. $\sigma_y^2 = 0$, we have $n = \alpha^2\sigma_y^2\,\mathrm{diag}(\Sigma) = 0$, so the constant terms in Theorem 2 and Theorem 4 vanish, and we directly obtain

$$m(\theta^{(t)}) = R^t\,m(\theta^{(0)})$$

for SGD, and

$$m(\theta^{((m+1)T)}) = \lambda(\alpha, b, T, N, \Sigma)\,m(\theta^{(mT)})$$

for SVRG, where $\lambda$ is defined in Eq. (11).
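The contraction factor $\lambda$ can also be formed explicitly, which makes it easy to compare the per-snapshot SVRG rate against the $T$-step SGD factor $R^T$. A sketch (NumPy; the hyperparameter values are placeholders of ours, and we assume $\alpha \max_i \Sigma_{ii} < 1$ so that $P$ is invertible):

import numpy as np

def sgd_R(alpha, b, Sigma):
    d = len(Sigma); sig = np.diag(Sigma); I = np.eye(d)
    return (I - alpha * Sigma) @ (I - alpha * Sigma) + (alpha**2 / b) * (Sigma @ Sigma + np.outer(sig, sig))

def svrg_lambda(alpha, b, T, N, Sigma):
    d = len(Sigma); sig = np.diag(Sigma); I = np.eye(d)
    S2dd = Sigma @ Sigma + np.outer(sig, sig)
    R = (I - alpha * Sigma) @ (I - alpha * Sigma) + (alpha**2 / b) * S2dd
    Q = (2 * alpha**2 / b) * S2dd
    F = (alpha**2 * (N + b) / (N * b)) * S2dd
    P = I - alpha * Sigma
    Pinv = np.linalg.inv(P)                                   # P is diagonal, hence well conditioned here
    acc, Rk, Pk = np.zeros((d, d)), I, np.linalg.matrix_power(P, T - 1)
    for k in range(T):                                        # sum_{k=0}^{T-1} R^k Q P^{T-1-k}
        acc += Rk @ Q @ Pk
        Rk = Rk @ R
        Pk = Pk @ Pinv                                        # exponent of P decreases with k
    RT = Rk                                                   # after the loop, Rk = R^T
    geo = np.linalg.solve(np.eye(d) - R, np.eye(d) - RT)      # (I - R)^{-1} (I - R^T)
    return RT - acc + geo @ F

Sigma = np.diag(np.exp(np.linspace(np.log(1.0), np.log(0.01), 100)))
lam = svrg_lambda(alpha=0.1, b=64, T=256, N=4096, Sigma=Sigma)
print(np.max(np.abs(np.linalg.eigvals(lam))))                 # per-snapshot contraction of SVRG
print(np.max(np.abs(np.linalg.eigvals(
    np.linalg.matrix_power(sgd_R(0.1, 64, Sigma), 256)))))    # matched T-step SGD contraction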
}, { "heading": "G THE SENSITIVITY OF N", "text": "In our theoretical analysis (Section 3), we evaluate a large batch gradient ḡ to control variance. That is because any number of data points can be directly sampled form the true distribution. But in order to compare the computational cost between SVRG and SGD, we set the number of data points used to calculate ḡ asN , which is slightly different with the original SVRG’s setup of full-batch gradient. Therefore, we evaluate the sensitivity of N to illustrate when N is beyond a threshold, it will cause little difference in convergence speed for SVRG.\nFrom figure 5a, we can tell N has little effect on the convergence speed of SVRG under the noisy least square model, but it determines the constant term of label noise in Eq. 9 which determines the level of final loss.\nBesides, we also compare large batch SGD to SVRG in Figure 5b under the computation budget b = 2048 with fixed snapshot interval T = 256 for SVRG, expentionally picking 50 learning rates from 1.5 to 0.01, varying b′, N according to b′ = 12 (1− N Tb )b." } ]
" } ]
2019
null
SP:6022b52e1e160bd034df1a7c71c6ca163bcf4dc0
[ "This paper proposes a novel form of surprise-minimizing intrinsic reward signal that leads to interesting behavior in the absence of an external reward signal. The proposed approach encourages an agent to visit states with high probability / density under a parametric marginal state distribution that is learned as the agent interacts with its environment. The method (dubbed SMiRL) is evaluated in visual and proprioceptive high-dimensional \"entropic\" benchmarks (that progress without the agent doing anything in order to prevent trivial solutions such as standing and never moving), and compared against two surprise-maximizing intrinsic motivation methods (ICM and RND) as well as to a reward-maximizing oracle. The experiments demonstrate that SMiRL can lead to more sensible behavior compared to ICM and RND in the chosen environments, and eventually recover the performance of a purely reward-maximizing agent. Also, SMiRL can be used for imitation learning by pre-training the parametric state distribution with data from a teacher. Finally, SMiRL shows the potential of speeding up reinforcement learning by using intrinsic motivation as an additional reward signal added to the external task-defining reward.", "This paper proposes Surprise Minimizing RL (SMiRL), a conceptual framework for training a reinforcement learning agent to seek out states with high likelihood under a density model trained on visited states. They qualitatively and quantitatively explore various aspects of the behaviour of these agents and argue that they exhibit a variety of favourable properties. They also compare their surprise minimizing algorithm with a variety of novelty-seeking algorithms (which can be considered somewhat the opposite) and show that in certain cases surprise minimization can result in more desirable behaviour. Finally, they show that using surprise minimization as an auxiliary reward can speed learning in certain settings." ]
All living organisms struggle against the forces of nature to carve out niches where they can maintain relative stasis. We propose that such a search for order amidst chaos might offer a unifying principle for the emergence of useful behaviors in artificial agents. We formalize this idea into an unsupervised reinforcement learning method called surprise minimizing RL (SMiRL). SMiRL trains an agent with the objective of maximizing the probability of observed states under a model trained on all previously seen states. The resulting agents acquire several proactive behaviors to seek and maintain stable states such as balancing and damage avoidance, that are closely tied to the environment’s prevailing sources of entropy, such as winds, earthquakes, and other agents. We demonstrate that our surprise minimizing agents can successfully play Tetris, Doom, and control a humanoid to avoid falls, without any task-specific reward supervision. We further show that SMiRL can be used together with a standard task reward to accelerate reward-driven learning.
[]
[ { "authors": [ "Joshua Achiam", "Shankar Sastry" ], "title": "Surprise-based intrinsic motivation for deep reinforcement learning", "venue": "arXiv preprint arXiv:1703.01732,", "year": 2017 }, { "authors": [ "Yusuf Aytar", "Tobias Pfaff", "David Budden", "Thomas Paine", "Ziyu Wang", "Nando de Freitas" ], "title": "Playing hard exploration games by watching youtube", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Bowen Baker", "Ingmar Kanitscheider", "Todor Markov", "Yi Wu", "Glenn Powell", "Bob McGrew", "Igor Mordatch" ], "title": "Emergent tool use from multi-agent autocurricula", "venue": null, "year": 1909 }, { "authors": [ "Trapit Bansal", "Jakub Pachocki", "Szymon Sidor", "Ilya Sutskever", "Igor Mordatch" ], "title": "Emergent complexity via multi-agent competition", "venue": "arXiv preprint arXiv:1710.03748,", "year": 2017 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Marc G. Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "In Proceedings of the 24th International Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell", "Alexei A. Efros" ], "title": "Large-Scale Study of Curiosity-Driven Learning", "venue": "2018a. URL http://arxiv.org/abs/", "year": 2018 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": null, "year": 2018 }, { "authors": [ "Nuttapong Chentanez", "Andrew G Barto", "Satinder P Singh" ], "title": "Intrinsically motivated reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Ashley D Edwards", "Himanshu Sahni", "Yannick Schroecker", "Charles L Isbell" ], "title": "Imitating latent policies from observation", "venue": "arXiv preprint arXiv:1805.07914,", "year": 2018 }, { "authors": [ "Karl Friston" ], "title": "The free-energy principle: a rough guide to the brain", "venue": "Trends in cognitive sciences,", "year": 2009 }, { "authors": [ "Karl J. Friston", "Jean Daunizeau", "Stefan J. Kiebel" ], "title": "Reinforcement learning or active inference", "venue": "PLOS ONE, 4(7):1–13,", "year": 2009 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "VIME: Variational Information Maximizing Exploration", "venue": null, "year": 2016 }, { "authors": [ "Michał Kempka", "Marek Wydmuch", "Grzegorz Runc", "Jakub Toczek", "Wojciech Jaśkowski" ], "title": "Vizdoom: A doom-based ai research platform for visual reinforcement learning", "venue": "IEEE Conference on Computational Intelligence and Games (CIG),", "year": 2016 }, { "authors": [ "Youngjin Kim", "Wontae Nam", "Hyunwoo Kim", "Ji-Hoon Kim", "Gunhee Kim" ], "title": "Curiosity-bottleneck: Exploration by distilling task-specific novelty", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "ICLR,", "year": 2014 }, { "authors": [ "Alexander S. 
Klyubin", "Daniel Polani", "Chrystopher L. Nehaniv" ], "title": "All else being equal be empowered", "venue": "Advances in Artificial Life,", "year": 2005 }, { "authors": [ "Joel Lehman", "Kenneth O Stanley" ], "title": "Abandoning objectives: Evolution through the search for novelty alone", "venue": "Evolutionary computation,", "year": 2011 }, { "authors": [ "S. Levine", "C. Finn", "T. Darrell", "P. Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "YuXuan Liu", "Abhishek Gupta", "Pieter Abbeel", "Sergey Levine" ], "title": "Imitation from observation: Learning to imitate behaviors from raw video via context translation", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Manuel Lopes", "Tobias Lang", "Marc Toussaint", "Pierre-Yves Oudeyer" ], "title": "Exploration in modelbased reinforcement learning by empirically estimating learning progress", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Shakir Mohamed", "Danilo Jimenez Rezende" ], "title": "Variational information maximisation for intrinsically motivated reinforcement learning", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Pierre-Yves Oudeyer", "Frederic Kaplan" ], "title": "What is intrinsic motivation? a typology of computational approaches", "venue": "Frontiers in neurorobotics,", "year": 2009 }, { "authors": [ "Pierre-Yves Oudeyer", "Frdric Kaplan", "Verena V Hafner" ], "title": "Intrinsic motivation systems for autonomous mental development", "venue": "IEEE transactions on evolutionary computation,", "year": 2007 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven Exploration by Self-supervised Prediction", "venue": null, "year": 2017 }, { "authors": [ "Deepak Pathak", "Dhiraj Gandhi", "Abhinav Gupta" ], "title": "Self-Supervised Exploration via Disagreement", "venue": null, "year": 2019 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Curious model-building control systems", "venue": "In Proc. international joint conference on neural networks,", "year": 1991 }, { "authors": [ "Eric D Schneider", "James J Kay" ], "title": "Life as a manifestation of the second law of thermodynamics", "venue": "Mathematical and computer modelling,", "year": 1994 }, { "authors": [ "Erwin Schrödinger" ], "title": "What is life? 
The physical aspect of the living cell and mind", "venue": null, "year": 1944 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Pranav Shyam", "Wojciech Jaśkowski", "Faustino Gomez" ], "title": "Model-based active exploration", "venue": "arXiv preprint arXiv:1810.12162,", "year": 2018 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm", "venue": "arXiv preprint arXiv:1712.01815,", "year": 2017 }, { "authors": [ "Susanne Still", "Doina Precup" ], "title": "An information-theoretic approach to curiosity-driven reinforcement learning", "venue": "Theory in Biosciences,", "year": 2012 }, { "authors": [ "Sainbayar Sukhbaatar", "Zeming Lin", "Ilya Kostrikov", "Gabriel Synnaeve", "Arthur Szlam", "Rob Fergus" ], "title": "Intrinsic motivation and automatic curricula via asymmetric self-play", "venue": "arXiv preprint arXiv:1703.05407,", "year": 2017 }, { "authors": [ "Yi Sun", "Faustino Gomez", "Jürgen Schmidhuber" ], "title": "Planning to be surprised: Optimal bayesian exploration in dynamic environments", "venue": "In International Conference on Artificial General Intelligence,", "year": 2011 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Faraz Torabi", "Garrett Warnell", "Peter Stone" ], "title": "Behavioral cloning from observation", "venue": "arXiv preprint arXiv:1805.01954,", "year": 2018 }, { "authors": [ "Faraz Torabi", "Garrett Warnell", "Peter Stone" ], "title": "Generative adversarial imitation from observation", "venue": "arXiv preprint arXiv:1807.06158,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The general struggle for existence of animate beings is not a struggle for raw materials, nor for energy, but a struggle for negative entropy.\n(Ludwig Boltzmann, 1886)\nAll living organisms carve out environmental niches within which they can maintain relative predictability amidst the ever-increasing entropy around them (Boltzmann, 1886; Schrödinger, 1944; Schneider & Kay, 1994; Friston, 2009). Humans, for example, go to great lengths to shield themselves from surprise — we band together in millions to build cities with homes, supplying water, food, gas, and electricity to control the deterioration of our bodies and living spaces amidst heat and cold, wind and storm. The need to discover and maintain such surprise-free equilibria has driven great resourcefulness and skill in organisms across very diverse natural habitats. Motivated by this, we ask: could the motive of preserving order amidst chaos guide the automatic acquisition of useful behaviors in artificial agents?\nOur method therefore addresses the unsupervised reinforcement learning problem: how might an agent in an environment acquire complex behaviors and skills with no external supervision? This central problem in artificial intelligence has evoked several candidate solutions, largely focusing on novelty-seeking behaviors (Schmidhuber, 1991; Lehman & Stanley, 2011; Still & Precup, 2012; Bellemare et al., 2016; Houthooft et al., 2016; Pathak et al., 2017). In simulated worlds, such as video games, novelty-seeking intrinsic motivation can lead to interesting and meaningful behavior. However, we argue that these sterile environments are fundamentally lacking compared to the real world. In the real world, natural forces and other agents offer bountiful novelty. The second law of thermodynamics stipulates ever-increasing entropy, and therefore perpetual novelty, without even requiring any agent intervention. Instead, the challenge in natural environments is homeostasis: discovering behaviors that enable agents to maintain an equilibrium, for example to preserve their bodies, their homes, and avoid predators and hunger. Even novelty seeking behaviors may emerge naturally as a means to maintain homeostasis: an agent that is curious and forages for food in unlikely places might better satisfy its hunger.\nWe formalize allostasis as an objective for reinforcement learning based on surprise minimization (SMiRL). In highly entropic and dynamic environments with undesirable forms of novelty, minimizing surprise (i.e., minimizing novelty) causes agents to naturally seek a stable equilibrium. Natural environments with winds, earthquakes, adversaries, and other disruptions already offer a steady stream of novel stimuli, and an agent that minimizes surprise in these environments will act and explore in order to find the means to maintain a stable equilibrium in the face of these disturbances.\nSMiRL is simple to describe and implement: it works by maintaining a density p(s) of visited states and training a policy to act such that future states have high likelihood under p(s). This interaction scheme is shown in Figure 1(right) Across many different environments, with varied disruptive forces, and in agents with diverse embodiments and action spaces, we show that this simple approach induces useful equilibrium-seeking behaviors. We show that SMiRL agents can solve Tetris, avoid fireballs in Doom, and enable a simulated humanoid to balance and locomote, without any explicit task reward. 
More pragmatically, we show that SMiRL can be used together with a task reward to accelerate standard reinforcement learning in dynamic environments, and can provide a simple mechanism for imitation learning. SMiRL holds promise for a new kind of unsupervised RL method that produces behaviors that are closely tied to the prevailing disruptive forces, adversaries, and other sources of entropy in the environment. Videos of our results are available at https://sites.google.com/view/surpriseminimization" }, { "heading": "2 SURPRISE MINIMIZING AGENTS", "text": "We propose surprise minimization as a means to operationalize the idea of learning useful behaviors by seeking to preserve order amidst chaos. In complex natural environments with disruptive forces that tend to naturally increase entropy, which we refer to as entropic environments, minimizing surprise over an agent's lifetime requires taking action to reach stable states, and often requires acting continually to maintain homeostasis and avoid surprise. The long-term effects of actions on the agent's surprise can be complex and somewhat counterintuitive, especially when we consider that actions not only change the state that the agent is in, but also its beliefs about which states are more likely. The combination of these two processes induces the agent to not only seek states where p(s) is large, but to also visit states so as to alter p(s), in order to receive larger rewards in the future. This \"meta\" level reasoning can result in behaviors where the agent might actually visit new states in order to make them more familiar. An example of this is shown in Figure 1, where, in order to avoid the disruptions from the changing weather, an agent needs to build a shelter or home to protect itself and decrease its observable surprise. The SMiRL formulation relies on disruptive forces in the environment to avoid collapse to degenerate solutions, such as staying in a single state s0. Fortunately, natural environments typically offer no shortage of such disruption." }, { "heading": "2.1 SURPRISE MINIMIZATION PROBLEM STATEMENT", "text": "To instantiate SMiRL, we design a reinforcement learning agent with a reward proportional to how familiar its current state is, based on the history of states it has experienced during its \"life,\" which corresponds to a single episode. Formally, we assume a fully-observed controlled Markov process (CMP), though extensions to partially observed settings can also be developed. We use $s_t$ to denote the state at time $t$, $a_t$ to denote the agent's action, $\rho(s_0)$ to denote the initial state distribution, and $T(s_{t+1}|s_t, a_t)$ to denote the transition dynamics. The agent has access to a dataset $D_t = \{s_1, \ldots, s_t\}$ of all states experienced so far. By fitting a generative model $p_{\theta_t}(s)$ with parameters $\theta_t$ to this dataset, the agent obtains an estimator that can be used to evaluate the negative surprise reward, given by

$$r_t(s) = \log p_{\theta_t}(s). \quad (1)$$

We denote the fitting process as $\theta_t = U(D_t)$. The goal of a SMiRL agent is to maximize the sum $\sum_t \log p_{\theta_t}(s_{t+1})$. Since the agent's actions affect the future $D_t$ and thus the future $\theta_t$'s, the optimal policy does not simply visit states that have a high $p_{\theta_t}(s)$ now, but rather those states that will change $p_{\theta_t}(s)$ such that it provides high likelihood to the states that it sees in the future." }, { "heading": "2.2 TRAINING SMIRL AGENTS", "text": "Algorithm 1: Training a SMiRL agent with RL
1: Initialize policy parameters φ
2: Initialize RL algorithm RL
3: for each episode = 1, 2, . . . do
4:   s0 ∼ ρ(s0)  ▷ Initial state distribution.
5:   D0 ← {s0}  ▷ Reset state history.
6:   for each t = 0, 1, . . . , T do
7:     θt ← U(Dt)  ▷ Fit density model.
8:     at ∼ πφ(at | st, θt, t)  ▷ Run policy.
9:     st+1 ∼ T(st+1 | st, at)  ▷ Transition dynamics.
10:    rt ← log pθt(st+1)  ▷ Familiarity reward.
11:    Dt+1 ← Dt ∪ {st+1}  ▷ Update state history.
12:  end for each
13:  φ ← RL(φ, s[0:T], θ[0:T], |D|[0:T], a[0:T], r[0:T])
14: end for each
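To make the interaction loop concrete, here is a minimal Python sketch of one SMiRL episode. The env/policy interfaces, the Gaussian density class, and the variance floor are illustrative stand-ins of ours, not the authors' code:

import numpy as np

class GaussianDensity:
    """Independent-Gaussian density model p_theta(s); theta = (mu, var)."""
    def fit(self, states):                        # U(D_t): refit to the episode so far
        S = np.stack(states)
        self.mu = S.mean(axis=0)
        self.var = S.var(axis=0) + 1e-4           # small floor for numerical stability (our choice)
    def log_prob(self, s):
        return float(-0.5 * np.sum(np.log(2 * np.pi * self.var)
                                   + (s - self.mu) ** 2 / self.var))

def run_smirl_episode(env, policy, horizon):
    s = env.reset()
    D, model, transitions = [s], GaussianDensity(), []
    for t in range(horizon):
        model.fit(D)                              # line 7: theta_t = U(D_t)
        theta = np.concatenate([model.mu, model.var])
        a = policy(s, theta, len(D))              # line 8: pi(a | s, theta_t, |D_t|)
        s_next = env.step(a)                      # line 9: transition dynamics
        r = model.log_prob(s_next)                # line 10: r_t = log p_{theta_t}(s_{t+1})
        transitions.append((s, a, r, s_next))
        D.append(s_next)                          # line 11: update state history
        s = s_next
    return transitions                            # handed to any RL algorithm (line 13)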
We now present a practical reinforcement learning algorithm for surprise minimization. Recall that a critical component of SMiRL is reasoning about the effect of actions on future states that will be added to $D$, and their effect on future density estimates – e.g., to understand that visiting a state that is currently unfamiliar and staying there will make that state familiar, and therefore lead to higher rewards in the long run. This means that the agent must reason not only about the unknown MDP dynamics, but also about the dynamics of the density model $p_\theta(s)$ trained on $D$. In our algorithm, we accomplish this via an episodic training procedure, where the agent is trained over many episodes and $D$ is reset at the beginning of each episode to simulate a new lifetime. Through this procedure, SMiRL learns the parameters $\phi$ of the agent's policy $\pi_\phi$ for a fixed horizon. To learn this, the policy must be conditioned on some sufficient statistic of $D_t$, since the reward $r_t$ is a function of $D_t$. Having trained parameterized generative models $p_{\theta_t}$ as above on all states seen so far, we condition $\pi$ on $\theta_t$ and $|D_t|$. This implies an assumption that $\theta_t$ and $|D_t|$ represent the sufficient statistics necessary to summarize the contents of the dataset for the policy, and contain all information required to reason about how $p_\theta$ will evolve in the future. Of course, we could also use any other summary statistic, or even read in the entirety of $D_t$ using a recurrent model. In the next section, we also describe a modification that allows us to utilize a deep density model without conditioning $\pi$ on a high-dimensional parameter vector.

Algorithm 1 provides the pseudocode. SMiRL can be used with any reinforcement learning algorithm, which we denote RL in the pseudocode. As is standard in reinforcement learning, we alternate between sampling episodes from the policy (lines 6-12) and updating the policy parameters (line 13). The details of the updates are left to the specific RL algorithm, which may be on- or off-policy. During each episode, $D_0$ is initialized with the first state (line 5) and, as shown in line 11, grows as each state visited by the agent is added to the dataset. The parameters $\theta_t$ of the density model are fit to $D_t$ at each timestep, both to be passed to the policy and to define the reward function. At the end of the episode, $D_T$ is discarded and a new $D_0$ is initialized." }, { "heading": "2.3 STATE DENSITY ESTIMATION WITH LEARNED REPRESENTATIONS", "text": "While SMiRL may in principle be used with any choice of model class for the generative model $p_\theta(s)$, this choice must be carefully made in practice. As we show in our experiments, relatively simple distribution classes, such as products of independent marginals, suffice to run SMiRL in simple environments with low-dimensional state spaces. However, it may be desirable in more complex environments to use more sophisticated density estimators, especially when learning directly from high-dimensional observations such as images.
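For binary observations such as the Tetris board, the analogous independent-marginals model is a product of Bernoullis. One possible sketch (the Laplace smoothing is our choice, added to avoid log(0); this is not the authors' implementation):

import numpy as np

class BernoulliDensity:
    """Product-of-Bernoullis density model over binary observations."""
    def fit(self, states):                                  # U(D_t) for binary images
        S = np.stack(states).astype(float)
        self.p = (S.sum(axis=0) + 1.0) / (len(S) + 2.0)     # Laplace-smoothed pixel means
    def log_prob(self, s):
        return float(np.sum(s * np.log(self.p) + (1 - s) * np.log(1 - self.p)))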
In particular, we propose to use variational autoencoders (VAEs) (Kingma & Welling, 2014) to learn a non-linear compressed state representation and facilitate estimation of $p_\theta(s)$ for SMiRL. A VAE is trained using the standard loss to reconstruct states $s$ after encoding them into a low-dimensional normal distribution $q_\omega(z|s)$ through the encoder $q$ with parameters $\omega$. A decoder $p_\psi(s|z)$ with parameters $\psi$ computes $s$ from the encoder output $z$. During this training process, a KL divergence loss between the prior $p(z)$ and $q_\omega(z|s)$ is used to keep this distribution near the standard normal distribution. We now describe a VAE-based approach for estimating the SMiRL surprise reward. In our implementation, the VAE is trained online, with VAE updates interleaved with RL updates. Training a VAE requires more data than the simpler density models that can easily be fit to data from individual episodes. We propose to overcome this by not resetting the VAE parameters between training episodes; instead, we train the VAE across episodes. Rather than passing all VAE parameters to the SMiRL policy, we track a separate episode-specific distribution $p_{\theta_t}(z)$, distinct from the VAE prior, over the course of each episode. $p_{\theta_t}(z)$ replaces $p_{\theta_t}(s)$ in the SMiRL algorithm and is fit to only that episode's state history. We represent $p_{\theta_t}(z)$ as a vector of independent normal distributions, and fit it to the VAE encoder outputs. This replaces the density estimate in line 10 of Algorithm 1. Specifically, the corresponding update $U(D_t)$ is performed as follows:

$$z_0, \ldots, z_t = \mathbb{E}[q_\omega(z|s)] \quad \text{for } s \in D_t,$$

$$\mu = \frac{\sum_{j=0}^{t} z_j}{t+1}, \qquad \sigma = \frac{\sum_{j=0}^{t} (\mu - z_j)^2}{t+1},$$

$$\theta_t = [\mu, \sigma].$$

Training the VAE online, over all previously seen data, deviates from the recipe in the previous section, where the density model was only updated within an episode. However, this does provide for a much richer state density model, and the within-episode updates to estimate $p_{\theta_t}(z)$ still provide our method with meaningful surprise-minimizing behavior. As we show in our experiments, this can improve the performance of SMiRL in practice.
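A sketch of this episodic latent-space update (assuming a function encode_mean that returns $\mathbb{E}[q_\omega(z|s)]$ from the trained encoder; the encoder itself and the variance floor are our assumptions):

import numpy as np

def latent_density_update(encode_mean, D_t):
    """U(D_t) in latent space: fit a diagonal Gaussian to the encoder means."""
    Z = np.stack([encode_mean(s) for s in D_t])        # z_j = E[q_omega(z | s_j)]
    mu = Z.mean(axis=0)
    sigma2 = ((Z - mu) ** 2).mean(axis=0)              # biased estimator, as in the text
    return mu, sigma2

def latent_log_prob(z, mu, sigma2, eps=1e-6):
    v = sigma2 + eps                                   # floor avoids division by zero early on
    return float(-0.5 * np.sum(np.log(2 * np.pi * v) + (z - mu) ** 2 / v))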
" }, { "heading": "3 ENVIRONMENTS", "text": "We evaluate SMiRL on a range of environments, from video game domains to simulated robotic control scenarios. These are rich, dynamic environments — the world evolves automatically even without agent intervention due to the presence of disruptive forces and adversaries. Note that SMiRL relies on such disruptions to produce meaningful emergent behavior, since mere inaction would otherwise suffice to achieve homeostasis. However, as we have argued above, such disruptions are also an important property of most real world environments. Current RL benchmarks neglect this, focusing largely on unrealistically sterile environments where the agent alone drives change (Bellemare et al., 2015; Brockman et al., 2016). Therefore, our choices of environments, discussed below, are not solely motivated by suitability to SMiRL; rather, we aim to evaluate unsupervised RL approaches, ours as well as others, in these more dynamic environments.

Tetris. The classic game of Tetris offers a naturally entropic environment — the world evolves according to its own rules and dynamics even in the absence of coordinated behavior of the agent, piling up pieces and filling up the board. It therefore requires active intervention to maintain homeostasis. We consider a 4 × 10 Tetris board with tromino shapes (composed of 3 squares), as shown in Figure 2a. The observation is a binary image of the current board with one pixel per square, as well as an indicator for the type of shape that will appear next. Each action denotes one of the 4 columns in which to drop the shape and one of 4 shape orientations. For evaluation, we measure how many rows the agent clears, as well as how many times the agent dies in the game by allowing the blocks to reach the top of the board, within the max episode length of 100. Since the observation is a binary image, we model p(s) as independent Bernoulli. See Appendix A for details.

VizDoom. We consider two VizDoom environments from Kempka et al. (2016): TakeCover and DefendTheLine. TakeCover provides a dynamically evolving world, with enemies that appear over time and throw fireballs aimed at the player (Kempka et al., 2016). The observation space consists of the 4 previous grayscale first-person image observations, and the action space consists of moving left or right. We evaluate the agent based on how many times it is hit by fireballs, which we term the \"damage\" taken by the agent. Images from the TakeCover environment are shown in Fig 2b.

In DefendTheLine, additional enemies can move towards the player, and the player can shoot the enemies. The agent starts with limited ammunition. This environment provides a \"survival\" reward function (r = 1 for each timestep alive) and performance is measured by how long the agent survives in the environment. For both environments, we model p(s) as independent Gaussian over the pixels. See Appendix A for details.

miniGrid is a navigation task where the agent has a partial observation of the environment, shown by the lighter gray area around the red agent in Figure 2d. The agent needs to navigate down the hallways to escape the enemy agents (blue) and reach, through a randomly placed door, the safe room on the right, which the enemies cannot enter.

Simulated Humanoid robots. In the last set of environments, a simulated planar Humanoid robot is placed in situations where it is in danger of falling. The action consists of the PD targets for each of the joints. The state space comprises the rotation of each joint and the linear velocity of each link. We evaluate several versions of this task, which are shown in Figure 2. The Cliff task starts the agent at the edge of a cliff, in a random pose and with a forward velocity of 1 m/s. Falling off the cliff leads to highly irregular and unpredictable configurations, so a surprise minimizing agent will want to learn to stay on the cliff. In the Treadmill environment, the robot starts on a platform that is moving at 1 m/s backwards; an agent will be carried backwards unless it learns some locomotion. The Pedestal environment is designed to show that SMiRL can learn a more active balancing policy: the agent starts out on a thin pedestal, random forces are applied to the robot's links, and boxes of random size are thrown at the agent. The Walk domain is used to evaluate the use of the SMiRL reward as a form of \"stability reward\" that assists the agent in learning how to walk while reducing the number of falls. This is done by initializing p(s) from example walking data and adding this to the task reward, as discussed in Section 4.2.
The task reward in Walk is $r_{\text{walk}} = \exp(-1.5\, v_d^2)$, where $v_d$ is the difference between the x velocity and the desired velocity of 1 m/s. In these environments, we measure performance as the proportion of episodes with a fall. A state is classified as a fall if either the agent's links, except for the feet, are touching the ground, or if the agent is 5 meters or more below the level of the platform or cliff. Since the state is continuous, we model p(s) as independent Gaussian; see Appendix A for details." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "Our experiments aim to answer the following questions: (1) Can SMiRL learn meaningful and complex emergent behaviors in the environments described in Section 3? (2) Can we incorporate deep generative models into SMiRL and use state densities in learned representation spaces? (3) Can SMiRL serve as a joint training objective to accelerate acquisition of reward-guided behavior, and does it outperform prior intrinsic motivation methods in this role? We also illustrate several applications of SMiRL, showing that it can accelerate task learning, provide for exploration with fewer damaging falls, and provide for elementary imitation. Videos of learned behaviors are available on the website https://sites.google.com/view/surpriseminimization/home" }, { "heading": "4.1 EMERGENT BEHAVIOR IN UNSUPERVISED LEARNING", "text": "First, we evaluate SMiRL on the Tetris, VizDoom, Cliff, and Treadmill tasks, studying its ability to generate purposeful coordinated behaviors after training using only the surprise minimizing objective, in order to answer question (1). The SMiRL agent demonstrates meaningful emergent behaviors in each of these domains. In the Tetris environment, the agent is able to learn proactive behaviors to eliminate rows and properly play the game. The agent also learns emergent game playing behaviour in the VizDoom environment, acquiring an effective policy for dodging the fireballs thrown by the enemies. In both of these environments, stochastic and chaotic events force the SMiRL agent to take a coordinated course of action to avoid unusual states, such as full Tetris boards or fireball explosions. In the Cliff environment, the agent learns a policy that greatly reduces the probability of falling off of the cliff by bracing against the ground and stabilizing itself at the edge, as shown in Figure 2e. In the Treadmill environment, SMiRL learns a more complex locomotion behavior, jumping forward to increase the time it stays on the treadmill, as shown in Figure 2f. A quantitative measurement of the reduction in falls is shown in Figure 4.

We also study question (2) in the TakeCover, Cliff, Treadmill and Pedestal environments, training a VAE model and estimating surprise in the latent space of the VAE. In most of these environments, the representation learned by the VAE leads to faster acquisition of the emergent behaviors: in TakeCover (Figure 3, right) and Cliff (Figure 4, left), and in Treadmill (Figure 4, middle), where it leads to a substantially more successful locomotion behavior.

Comparison to intrinsic motivation. Figure 3 shows plots of the environment-specific rewards over time on Tetris and TakeCover, and Figure 4 for the Humanoid domains. In order to compare SMiRL to more standard intrinsic motivation methods, which seek out states that maximize surprise or novelty, we also evaluated ICM (Pathak et al., 2017) and RND (Burda et al., 2018b). We also plot an oracle agent that directly optimizes the task reward.
On Tetris, after training for 2000 epochs, SMiRL achieves near perfect play, on par with the oracle reward-optimizing agent, with no deaths, as shown in Figure 3 (left, middle). ICM seeks novelty by creating more and more distinct patterns of blocks rather than clearing them, leading to deteriorating game scores over time. On TakeCover, SMiRL effectively learns to dodge fireballs thrown by the adversaries, as shown in Figure 3 (right). Novelty-seeking ICM once again yields deteriorating rewards over time due to the method seeking novel events that correspond to damage. The baseline comparisons for the Cliff and Treadmill environments have a similar outcome. The novelty-seeking behaviour of ICM causes it to learn a type of irregular behaviour that causes the agent to jump off the Cliff and roll around on the Treadmill, maximizing the variety (and quantity) of falls (Figure 4).

SMiRL and curiosity are not mutually exclusive. We show that these intrinsic reward functions can be combined to achieve better results on the Treadmill environment (Figure 4, right). The combination of methods leads to increased initial learning speed and produces a walking-type gait on that task.

Exploration for SMiRL. To illustrate SMiRL's capacity to explore, we evaluate it in an environment where the agent needs to produce long-term planning behaviour. This environment is shown in Figure 2d, where the agent needs to navigate its way through the hallways, avoiding enemies, to reach a safe room through a randomly placed door. We found that SMiRL is able to solve this task. Results from these examples are shown on the accompanying website." }, { "heading": "4.2 APPLICATIONS OF SMIRL", "text": "While the central focus of this paper is the emergent behaviors that can be obtained via SMiRL, in this section we study more pragmatic applications. We show that SMiRL can be used for joint training to accelerate reward-driven learning of tasks, and also illustrate how SMiRL can be used to produce a rudimentary form of imitation learning.

Imitation. We can easily adapt SMiRL to perform imitation by initializing the buffer D0 with states from expert demonstrations, or even individual desired outcome states. To study this application of SMiRL, we initialize the buffer D0 in Tetris with user-specified desired board states. An illustration of the Tetris imitation task is presented in Figure 6, showing imitation of a box pattern (top) and a checkerboard pattern (bottom), with the leftmost frame showing the user-specified example, and the other frames showing actual states reached by the SMiRL agent. While a number of prior works have studied imitation without example actions (Liu et al., 2018; Torabi et al., 2018a; Aytar et al., 2018; Torabi et al., 2018b; Edwards et al., 2018; Lee et al.), this capability emerges automatically in SMiRL, without any further modification to the algorithm.

SMiRL as a stability reward. In this next experiment, we study how SMiRL can accelerate acquisition of reward-driven behavior in environments that present a large number of possible actions leading to diverse but undesirable states. Such settings are common in real life: a car can crash in many different ways, a robot can drop a glass on the ground causing it to break in many ways, etc. While this is of course not the case for all tasks, many real-world tasks do require the agent to stabilize itself in a specific and relatively narrow set of conditions.
Incorporating SMiRL into the learning objective in such settings can accelerate learning, and potentially improve safety during training, as the agent automatically learns to avoid anything that is unfamiliar. We study this application of SMiRL in the DefendTheLine task and the Walk task. In both cases, we use SMiRL to augment the task reward, such that the full reward is given by $r_{\text{combined}}(s) = r_{\text{task}}(s) + \alpha\, r_{\text{SMiRL}}(s)$, where $\alpha$ is chosen to put the two reward terms at a similar magnitude. In the Walk task, illustrated in Figure 2g, $p_\theta(s)$ is additionally initialized with 8 example walking trajectories (256 timesteps each), similarly to the imitation setting, to study how well SMiRL can incorporate prior knowledge into the stability reward (Reward + SMiRL (ours)). We include another version that is not initialized with expert data (Reward + SMiRL (no-expert)). We measure the number of falls during training, with and without the SMiRL reward term.
In such environments, which we believe are more reflective of the real-world, we find that prior noveltyseeking environments perform poorly.\nPrior works have also studied how competitive self-play and competitive, multi-agent environments can lead to complex behaviors with minimal reward information (Silver et al., 2017; Bansal et al., 2017; Sukhbaatar et al., 2017; Baker et al., 2019). Like these works, we also consider how complex behaviors can emerge in resource constrained environments. However, our approach can also be applied in non-competitive environments." }, { "heading": "6 DISCUSSION", "text": "We presented an unsupervised reinforcement learning method based on minimization of surprise. We show that surprise minimization can be used to learn a variety of behaviors that maintain “homeostasis,” putting the agent into stable and sustainable limit cycles in its environment. Across a range of tasks, these stable limit cycles correspond to useful, semantically meaningful, and complex behaviors: clearing rows in Tetris, avoiding fireballs in VizDoom, and learning to balance and hop forward with a bipedal robot. The key insight utilized by our method is that, in contrast to simple simulated domains, realistic environments exhibit dynamic phenomena that gradually increase entropy over time. An agent that resists this growth in entropy must take active and coordinated actions, thus learning increasingly complex behaviors. This stands in stark contrast to commonly proposed intrinsic exploration methods based on novelty, which instead seek to visit novel states and increase entropy.\nBesides fully unsupervised reinforcement learning, where we show that our method can give rise to intelligent and complex policies, we also illustrate several more pragmatic applications of our approach. We show that surprise minimization can provide a general-purpose risk aversion reward that, when combined with task rewards, can improve learning in environments where avoiding catastrophic (and surprising) outcomes is desirable. We also show that SMiRL can be adapted to perform a rudimentary form of imitation.\nOur investigation of surprise minimization suggests a number of directions for future work. The particular behavior of a surprise minimizing agent is strongly influenced by the particular choice of state representation: by including or excluding particular observation modalities, the agent will be more or less surprised. Thus, tasks may potentially be designed by choosing appropriate state or observation representations. Exploring this direction may lead to new ways of specifying behaviors for RL agents without explicit reward design. Other pragmatic applications of surprise minimization may also be explored in future work, including its effects for mitigating reward misspecification, by disincentivizing any unusual behavior that likely deviates from what the reward designer intended. Finally, we believe that a promising direction for future research is to study how lifelong surprise minimization can result in intelligent and sophisticated behavior that maintains homeostasis by acquiring increasingly complex behaviors. This may be particularly relevant in complex real-world environments populated by other intelligent agents, where maintaining homeostasis may require constant adaptation and exploration." } ]
2019
null
SP:8bdeb36997d6699e48511d9abac87df8c14bd087
[ "In this paper, a tensor decomposition method is studied for link prediction problems. The model is based on Tucker decomposition but the core tensor is decomposed as CP decomposition so that it can be seen as an interpolation between Tucker and CP. The performance is evaluated with several NLP data sets (e.g., subject-verb-object triplets). ", "The paper introduces a novel tensor decomposition that is reminiscent of canonical decomposition (CP) with low-rank factors, based on the observation that the core tensor in Tucker decomposition can be decomposed, resulting in a model interpolating between CP and Tucker. The authors argue that a straight application of AdaGrad on this decomposition is inadequate, and propose Ada^{imp} algorithm that enforces rotation invariance of the gradient update. The new decomposition is applied to ComplEx model (called PComplEx) that demonstrates better performance than the baseline." ]
The leading approaches to tensor completion and link prediction are based on the canonical polyadic (CP) decomposition of tensors. While these approaches were originally motivated by low rank approximations, the best performances are usually obtained for ranks as high as permitted by computation constraints. For large scale factorization problems where the factor dimensions have to be kept small, the performances of these approaches tend to drop drastically. The other main tensor factorization model, Tucker decomposition, is more flexible than CP for fixed factor dimensions, so we expect Tucker-based approaches to yield better performance under strong constraints on the number of parameters. However, as we show in this paper through experiments on standard benchmarks of link prediction in knowledge bases, ComplEx (Trouillon et al., 2016), a variant of CP, achieves similar performances to recent approaches based on Tucker decomposition on all operating points in terms of number of parameters. In a control experiment, we show that one problem in the practical application of Tucker decomposition to large-scale tensor completion comes from the adaptive optimization algorithms based on diagonal rescaling, such as Adagrad. We present a new algorithm for a constrained version of Tucker which implicitly applies Adagrad to a CP-based model with an additional projection of the embeddings onto a fixed lower dimensional subspace. The resulting Tucker-style extension of ComplEx obtains similar best performances to ComplEx, with substantial gains on some datasets under constraints on the number of parameters.
[]
[ { "authors": [ "Ivana Balažević", "Carl Allen", "Timothy Hospedales" ], "title": "Multi-relational poincar\\’e graph embeddings", "venue": "arXiv preprint arXiv:1905.09791,", "year": 2019 }, { "authors": [ "Ivana Balažević", "Carl Allen", "Timothy M Hospedales" ], "title": "Tucker: Tensor factorization for knowledge graph completion", "venue": "arXiv preprint arXiv:1901.09590,", "year": 2019 }, { "authors": [ "Antoine Bordes", "Jason Weston", "Ronan Collobert", "Yoshua Bengio" ], "title": "Learning structured embeddings of knowledge bases", "venue": "In Conference on artificial intelligence,", "year": 2011 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto Garcia-Duran", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating Embeddings for Modeling Multi-relational Data", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Rasmus Bro", "Claus A Andersson" ], "title": "Improving the speed of multiway algorithms: Part ii: Compression", "venue": "Chemometrics and intelligent laboratory systems,", "year": 1998 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "In International Conference on Learning Representations (ICLR2014),", "year": 2014 }, { "authors": [ "J Douglas Carroll", "Sandra Pruzansky", "Joseph B Kruskal" ], "title": "Candelinc: A general approach to multidimensional analysis of many-way arrays with linear constraints on parameters", "venue": null, "year": 1980 }, { "authors": [ "Tim Dettmers", "Pasquale Minervini", "Pontus Stenetorp", "Sebastian Riedel" ], "title": "Convolutional 2D knowledge graph embeddings", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Katsuhiko Hayashi", "Masashi Shimbo" ], "title": "On the equivalence of holographic and complex embeddings for link prediction", "venue": "arXiv preprint arXiv:1702.05563,", "year": 2017 }, { "authors": [ "Frank L. Hitchcock" ], "title": "The expression of a tensor or a polyadic as a sum of products", "venue": "Studies in Applied Mathematics,", "year": 1927 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Rodolphe Jenatton", "Nicolas L Roux", "Antoine Bordes", "Guillaume R Obozinski" ], "title": "A latent factor model for highly multi-relational data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Rudolf Kadlec", "Ondrej Bajgar", "Jan Kleindienst" ], "title": "Knowledge base completion: Baselines strike back", "venue": "In Proceedings of the 2nd Workshop on Representation Learning for NLP,", "year": 2017 }, { "authors": [ "Seyed Mehran Kazemi", "David Poole" ], "title": "Simple embedding for link prediction in knowledge graphs", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Tamara G. Kolda", "Brett W. 
Bader" ], "title": "Tensor decompositions and applications", "venue": "SIAM review,", "year": 2009 }, { "authors": [ "Timothée Lacroix", "Nicolas Usunier", "Guillaume Obozinski" ], "title": "Canonical tensor decomposition for knowledge base completion", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML-18),", "year": 2018 }, { "authors": [ "Adam Lerer", "Ledell Wu", "Jiajun Shen", "Timothee Lacroix", "Luca Wehrstedt", "Abhijit Bose", "Alex Peysakhovich" ], "title": "Pytorch-biggraph: A large-scale graph embedding system", "venue": null, "year": 1903 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Dat Quoc Nguyen" ], "title": "An overview of embedding models of entities and relationships for knowledge base completion", "venue": "arXiv preprint arXiv:1703.08098,", "year": 2017 }, { "authors": [ "Maximilian Nickel", "Volker Tresp", "Hans-Peter Kriegel" ], "title": "A three-way model for collective learning on multi-relational data", "venue": "In Proceedings of the 28th International Conference on Machine Learning", "year": 2011 }, { "authors": [ "Maximilian Nickel", "Kevin Murphy", "Volker Tresp", "Evgeniy Gabrilovich" ], "title": "A Review of Relational Machine Learning for Knowledge Graphs", "venue": "Proceedings of the IEEE,", "year": 2016 }, { "authors": [ "Maximilian Nickel", "Lorenzo Rosasco", "Tomaso A Poggio" ], "title": "Holographic embeddings of knowledge graphs. 2016b", "venue": null, "year": 2016 }, { "authors": [ "Maximillian Nickel", "Douwe Kiela" ], "title": "Poincaré embeddings for learning hierarchical representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Mathias Niepert", "Mohamed Ahmed", "Konstantin Kutzkov" ], "title": "Learning convolutional neural networks for graphs", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Michael Schlichtkrull", "Thomas N Kipf", "Peter Bloem", "Rianne van den Berg", "Ivan Titov", "Max Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In European Semantic Web Conference,", "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Kristina Toutanova", "Danqi Chen" ], "title": "Observed versus latent features for knowledge base and text inference", "venue": "In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality,", "year": 2015 }, { "authors": [ "Théo Trouillon", "Johannes Welbl", "Sebastian Riedel", "Éric Gaussier", "Guillaume Bouchard" ], "title": "Complex embeddings for simple link prediction", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yanjie Wang", "Samuel Broscheit", "Rainer Gemulla" ], "title": "A relational tucker decomposition for multirelational link prediction", "venue": "arXiv preprint arXiv:1902.00898,", "year": 2019 
}, { "authors": [ "Yexiang Xue", "Yang Yuan", "Zhitian Xu", "Ashish Sabharwal" ], "title": "Expanding holographic embeddings for knowledge completion", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Bishan Yang", "Wen-tau Yih", "Xiaodong He", "Jianfeng Gao", "Li Deng" ], "title": "Embedding entities and relations for learning and inference in knowledge bases", "venue": "arXiv preprint arXiv:1412.6575,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "The problems of representation learning and link prediction in multi-relational data can be formulated as a binary tensor completion problem, where the tensor is obtained by stacking the adjacency matrices of every relations between entities. This tensor can then be intrepreted as a \"knowledge base\", and contains triples (subject, predicate, object) representing facts about the world. Link prediction in knowledge bases aims at automatically discovering missing facts (Bordes et al., 2011; Nickel et al., 2011; Bordes et al., 2013; Nickel et al., 2016a; Nguyen, 2017).\nState of the art methods use the canonical polyadic (CP) decomposition of tensors (Hitchcock, 1927) or variants of it (Trouillon et al., 2016; Kazemi & Poole, 2018; Lacroix et al., 2018). While initially motivated by low-rank assumptions on the underlying ground-truth tensor, the best performances are obtained by setting the rank as high as permitted by computational constraints, using tensor norms for regularization (Lacroix et al., 2018). However, for large scale data where computational or memory constraints require ranks to be low (Lerer et al., 2019), performances drop drastically.\nTucker decomposition is another multilinear model which allows richer interactions between entities and predicate vectors. A special case of Tucker decomposition is RESCAL (Nickel et al., 2011), in which the relations are represented by matrices and entities factors are shared for subjects and objects. However, an evaluation of this model in Nickel et al. (2016b) shows that RESCAL lags behind other methods on several benchmarks of interest. Recent work have obtained more competitive results with similar models (Balažević et al., 2019b; Wang et al., 2019), using different regularizers or deep learning heuristics such as dropout and label smoothing. Despite these recent efforts, learning Tucker decompositions remains mostly unresolved. Wang et al. (2019) does not achieve state of the art results on standard benchmarks, and we show (see Figure 3) that the performances reported by\nBalažević et al. (2019b) are actually matched by ComplEx (Trouillon et al., 2016; Lacroix et al., 2018) optimized with Adam, which has less hyperparameters.\nIn this work, we overcome some of the difficulties associated with learning a Tucker model for knowledge base completion. Balažević et al. (2019b) use deep-learning mechanisms such as batch normalization (Ioffe & Szegedy, 2015), dropout (Srivastava et al., 2014) or learning-rate annealing to address both regularization and optimization issues. Our approach is different: We factorize the core tensor of the Tucker decomposition with CP to obtain a formulation which is closer to CP and better understand what difficulties appear. This yields a simple approach, which has a single regularization hyperparameter to tune for a fixed model specification.\nThe main novelty of our approach is a more careful application of adaptive gradient techniques. State-of-the-art methods for tensor completion use optimization algorithms with adaptive diagonal rescaling such as Adagrad (Duchi et al., 2011) or Adam (Kingma & Ba, 2014). Through control experiments in which our model is equivalent to CP up to a fixed rotation of the embeddings, we show that one of the difficulties in training Tucker-style decompositions can be attributed to the lack of invariance to rotation of the diagonal rescaling. 
Focusing on Adagrad, we propose a different update rule that is equivalent to implicitly applying Adagrad to a CP model with a projection of the embedding to a lower dimensional subspace.\nCombining the Tucker formulation and the implicit Adagrad update, we obtain performances that match state-of-the-art methods on the standard benchmarks and achieve significantly better results for small embedding sizes on several datasets. Compared to the best current algorithm for Tucker decomposition of Balažević et al. (2019b), our approach has fewer hyperparameters, and we effectively report better performances than the implementation of ComplEx of Lacroix et al. (2018) in the regime of small embedding dimension.\nWe discuss the related work in the next section. In Section 3, we present a variant of the Tucker decomposition which interpolates between Tucker and CP. The extreme case of this variant, which is equivalent to CP up to a fixed rotation of the embedding, serves as a control model to highlight the deficiency of the diagonal rescaling of Adagrad for Tucker-style decompositions in experiments reported in Section 4. We present the modified version of Adagrad in Section 5 and report experimental results on standard benchmarks of knowledge base completion in Section 7." }, { "heading": "2 LINK PREDICTION IN KNOWLEDGE BASES", "text": "Notation Tensors and matrices are denoted by uppercase letters. For a matrix U, u_i is the vector corresponding to the i-th row of U. The tensor product is written ⊗ and the Hadamard product (i.e., elementwise product) is written ⊙." }, { "heading": "2.1 LEARNING SETUP", "text": "A knowledge base consists of a set S of triples (subject, predicate, object) that represent (true) known facts. The goal of link prediction is to recover facts that are true but not in the database. The data is represented as a tensor X̃ ∈ {0, 1}^{N×L×N} for N the number of entities and L the number of predicates. Given a training set of triples, the goal is to provide a ranking of entities for queries of the type (subject, predicate, ?) and (?, predicate, object). Following Lacroix et al. (2018), we use the cross-entropy as a surrogate of the ranking loss. As proposed by Lacroix et al. (2018) and Kazemi & Poole (2018), we include reciprocal predicates: for each predicate P in the original dataset, and given an item o, each query of the form (?, P, o) is reformulated as a query (o, P^{-1}, ?), where o is now the subject of P^{-1}. This doubles the effective number of predicates but reduces the problem to queries of the type (subject, predicate, ?) only.\nFor a given triple (i, j, k) ∈ S, the training loss function for a tensor X is then\n\\ell_{i,j,k}(X) = -X_{i,j,k} + \\log\\Big( \\sum_{k' \\neq k} \\exp(X_{i,j,k'}) \\Big). \\quad (1)\nFor a tensor decomposition model X(θ) parameterized by θ, the parameters θ̂ are found by minimizing the regularized empirical risk with regularizer Λ:\n\\hat{\\theta} = \\arg\\min_{\\theta} L(\\theta) = \\arg\\min_{\\theta} \\frac{1}{|S|} \\sum_{(i,j,k) \\in S} \\ell_{i,j,k}(X(\\theta)) + \\nu \\Lambda(\\theta). \\quad (2)\nThis work studies specific models for X(θ), inspired by CP and Tucker decomposition. We discuss the related work on tensor decompositions and link prediction in knowledge bases below." }, { "heading": "2.2 RELATED WORK", "text": "" }, { "heading": "2.2.1 CANONICAL DECOMPOSITION AND ITS VARIANTS", "text": "The canonical polyadic (CP) decomposition of a tensor X is defined entrywise by\n\\forall i, j, k, \\quad X_{i,j,k} = \\langle u_i, v_j, w_k \\rangle := \\sum_{r=1}^{d} u_{ir} v_{jr} w_{kr}.\nThe smallest value of d for which this decomposition exists is the rank of X. 
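As a concrete illustration of this setup, here is a small Python sketch of the CP score and of the loss of Equation (1); the embedding tables, dimensions and indices are hypothetical, not from the released code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, d = 100, 20, 8                # entities, predicates, CP rank (all hypothetical)
U = rng.normal(size=(N, d))         # subject embeddings u_i
V = rng.normal(size=(2 * L, d))     # predicate embeddings v_j (reciprocals included)
W = rng.normal(size=(N, d))         # object embeddings w_k

def cp_scores(i, j):
    """All scores X_{i,j,:} = <u_i, v_j, w_k> over candidate objects k."""
    return W @ (U[i] * V[j])

def loss(i, j, k):
    """Eq. (1): -X_{i,j,k} + log sum_{k' != k} exp(X_{i,j,k'})."""
    x = cp_scores(i, j)
    others = np.delete(x, k)
    m = others.max()                # stabilized log-sum-exp
    return -x[k] + m + np.log(np.exp(others - m).sum())

print(loss(3, 5, 7))
```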
Each element X_{i,j,k} is thus represented as a multi-linear product of the 3 embeddings in R^d associated respectively to the i-th subject, the j-th predicate and the k-th object.\nCP currently achieves near state-of-the-art performances on standard benchmarks of knowledge base completion (Kazemi & Poole, 2018; Lacroix et al., 2018). Nonetheless, the best reported results are with the ComplEx model (Trouillon et al., 2016), which learns complex-valued embeddings and sets the embeddings of the objects to be the complex conjugate of the embeddings of subjects, i.e., w_k = ū_k. Prior to ComplEx, DistMult was proposed (Yang et al., 2014) as a variant of CP with w_k = u_k. While this model obtained good performances (Kadlec et al., 2017), it can only model symmetric relations and does not perform as well as ComplEx. CP-based models are optimized with vanilla Adam or Adagrad and a single regularization parameter (Trouillon et al., 2016; Kadlec et al., 2017; Lacroix et al., 2018) and do not require additional heuristics for training." }, { "heading": "2.2.2 TUCKER DECOMPOSITION AND ITS VARIANTS", "text": "Given a tensor X of size N×L×N, the Tucker decomposition of X is defined entrywise by\n\\forall i, j, k, \\quad X_{i,j,k} = \\langle u_i \\otimes v_j \\otimes w_k, C \\rangle := \\sum_{r_1=1}^{d_1} \\sum_{r_2=1}^{d_2} \\sum_{r_3=1}^{d_3} C_{r_1,r_2,r_3} u_{ir_1} v_{jr_2} w_{kr_3}.\nThe triple (d_1, d_2, d_3) contains the rank parameters of the decomposition. We also use a multilinear product notation X = [[C; U, V, W]], where U, V, W are the matrices whose rows are respectively the u_i, v_j, w_k, and C is the three-dimensional d_1 × d_2 × d_3 core tensor. Note that the CP decomposition is a Tucker decomposition in which d_1 = d_2 = d_3 = d and C is the identity, which we write [[U, V, W]]. With a non-trivial core tensor, Tucker decomposition is thus more flexible than CP for fixed embedding size. In knowledge base applications, we typically have d ≤ L ≪ N, so the vast majority of the model parameters are in the embedding matrices of the entities U and W. When constraints on the number of model parameters arise (e.g., memory constraints), Tucker models appear as natural candidates to increase the expressivity of the decomposition compared to CP with limited impact on the total number of parameters.\nWhile many variants of the Tucker decomposition have been proposed in the literature on tensor factorization (see e.g., Kolda & Bader, 2009), the first approach based on Tucker for link prediction in knowledge bases is RESCAL (Nickel et al., 2011). RESCAL uses a special form of Tucker decomposition in which the object and subject embeddings are shared, i.e., U = W, and it does not compress the relation matrices. In the multilinear product notation above, a RESCAL model is thus written as X = [[C; U, I, U]]. Despite some success on a few smaller datasets, the performance of RESCAL drops on larger datasets (Nickel et al., 2016b). This decrease in performance has been attributed either to improper regularization (Nickel et al., 2011) or optimization issues (Xue et al., 2018). Balažević et al. (2019b) revisits Tucker decomposition in the context of large-scale knowledge bases and resolves some of the optimization and regularization issues using learning rate annealing, batch-normalization and dropout. 
It comes at the price of more hyperparameters to tune for each dataset (label smoothing, three different dropouts and a learning rate decay), and as we discuss in our experiments, the results they report are not better than ComplEx for the same number of parameters.\nTwo methods were previously proposed to interpolate between the expressivity of RESCAL and CP. Xue et al. (2018) expands the HolE model (Nickel et al., 2016b) (and thus the ComplEx model (Hayashi & Shimbo, 2017)) based on cross-correlation of embeddings to close the gap in expressivity with the Tucker decomposition for a fixed embedding size. Jenatton et al. (2012) express the relation matrices in RESCAL as low-rank combinations of a family of matrices. We describe the link between these approaches and ours in Appendix 9.4. None of these approaches, however, studied the effect of their formulation on optimization, and they reported results inferior to ours." }, { "heading": "2.2.3 OTHER APPROACHES", "text": "(Graph) neural networks for link prediction Several methods have introduced models that go beyond the form of Tucker and canonical decompositions. ConvE (Dettmers et al., 2018) uses a convolution on a 2D tiling of the subject and relation embeddings as input to a 2-layer neural net that produces a new embedding for the pair, then compares to the object embedding. Graph neural networks (Scarselli et al., 2009; Niepert et al., 2016; Li et al., 2016; Bruna et al., 2014) have recently gained popularity and have been applied to link prediction in knowledge bases by Schlichtkrull et al. (2018). This model uses a graph convolutional architecture to generate a variant of CP.\nPoincaré embeddings Poincaré embeddings have been proposed as an alternative to usual tensor decomposition approaches to learn smaller embeddings when the relations are hierarchical (Nickel & Kiela, 2017). The method has recently been extended to link prediction in relational data, with very good performance trade-offs for small embedding dimensions on the WordNet benchmark (Balažević et al., 2019a), which contains purely hierarchical relationships such as hypernyms and hyponyms. However, such good results do not extend to other benchmarks." }, { "heading": "3 INTERPOLATING BETWEEN CP AND TUCKER", "text": "In order to better understand the underlying difficulties in learning (variants of) Tucker decompositions compared to CP, our analysis starts from a Tucker model in which the core tensor is itself decomposed with CP. Given an N × L × N tensor, a fixed d and assuming a (d, d, d) Tucker decomposition to simplify notation, a Tucker model where the core tensor is itself decomposed with a rank-D CP can be written as (details are given in Appendix 9.3):\nX_{ijk} = \\langle u_i \\otimes v_j \\otimes w_k, C \\rangle = \\langle P_1 u_i, P_2 v_j, P_3 w_k \\rangle, \\quad \\text{or equivalently} \\quad X = [[U P_1^\\top, V P_2^\\top, W P_3^\\top]],\nwhere P_1, P_2, P_3 are all D × d matrices. Since most knowledge bases have much fewer predicates than entities (L ≪ N), the dimension of the predicate factors has little impact on the overall number of model parameters. So in the remainder of the paper, we always consider P_2 = I. Learning the matrices U, V, W, P_1, P_3 of this decomposition simultaneously leads to the following model, which we call CP-Tucker (CPT):\n(\\text{CPT}) \\quad X_{ijk} = \\langle P_1 u_i, v_j, P_3 w_k \\rangle, \\quad u_i, w_k \\in \\mathbb{R}^d,\\ v_j \\in \\mathbb{R}^D,\\ P_i \\in \\mathbb{R}^{D \\times d}.\nThe CPT model is similar to a CP model except that the embedding matrices U and W have an additional low-rank constraint (d instead of D). 
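To make the parameterization concrete, here is a minimal Python sketch of the CPT score; all names, sizes and the random initialization are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, d, D = 100, 20, 8, 32          # hypothetical sizes; D is the rank of the core's CP
U = rng.normal(size=(N, d))          # u_i in R^d (the entity side stays low-dimensional)
V = rng.normal(size=(L, D))          # v_j in R^D (P2 = I, as in the text)
W = rng.normal(size=(N, d))          # w_k in R^d
P1 = rng.normal(size=(D, d))         # learned in CPT (fixed in the PCP variant below)
P3 = rng.normal(size=(D, d))

def cpt_score(i, j, k):
    """X_{ijk} = <P1 u_i, v_j, P3 w_k>: a rank-D CP score on projected embeddings."""
    return float(np.sum((P1 @ U[i]) * V[j] * (P3 @ W[k])))
```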
We say that the model interpolates between CP and Tucker because for D = d it is equivalent to CP (as long as P_1 and P_3 are full rank), whereas for D = d² we recover a full Tucker model because the matrices P_1 and P_3 can be chosen such that \\langle P_1 u_i, v_j, P_3 w_k \\rangle = u_i^\\top \\mathrm{Mat}(v_j)\\, w_k, where Mat is the operator that maps a d² vector to a d × d matrix (see Appendix 9.5).\nCPT is similar to CANDELINC (Carroll et al., 1980), except that in CANDELINC the factors U, V and W are fixed and used to compress the data in order to efficiently learn the P_i. Closer to CPT, Bro & Andersson (1998) first learn a Tucker3 decomposition of X before applying CANDELINC using the learned factors. These methods are only applicable to least-squares estimation, and for tensors of smaller scale than knowledge bases.\nFixed projection matrices: The Projected Canonical Polyadic (PCP) Decomposition In order to clarify the difficulties that arise when learning a CPT model compared to a CP model, we study a simpler model in which the matrices P_1 and P_3 are not learned but rather fixed during training and taken as random matrices with orthonormal columns. We call the resulting model the Projected Canonical Polyadic (PCP) decomposition, since P_1, P_3 project the embeddings of dimension d into a higher dimension D:\n(\\text{PCP}) \\quad X_{ijk} = \\langle P_1 u_i, v_j, P_3 w_k \\rangle, \\quad u_i, w_k \\in \\mathbb{R}^d,\\ v_j \\in \\mathbb{R}^D,\\ \\text{fixed } P_1, P_3 \\in \\mathbb{R}^{D \\times d}.\nWhen D = d the matrices P_i are then fixed unitary transformations. The PCP (or CPT) model in the case D = d is then equivalent to a CP model, up to a fixed invertible transformation of the embeddings. The capacity of the model grows beyond that of CP as D increases up to d²." }, { "heading": "4 MOTIVATION: OPTIMIZATION ISSUES WITH CPT AND PCP", "text": "As discussed in the related work, previous results suggest that Tucker models are more difficult to train than CP models. The goal of this section is to isolate an issue faced with CPT/PCP models when trained with vanilla adaptive gradient methods such as Adagrad or Adam." }, { "heading": "4.1 CONTROL EXPERIMENT: UNITARY P1 AND P3 IN PCP", "text": "When D = d in PCP, the model becomes equivalent to CP. Indeed, the matrices P_1 and P_3 are unitary (P_1 P_1^⊤ = P_3 P_3^⊤ = I) and so [[(U P_1) P_1^⊤, V, (W P_3) P_3^⊤]] = [[U, V, W]]. There is no practical interest in considering this degenerate case of PCP; we only use it in the following toy experiment to exhibit one of the difficulties encountered when training PCP.\nWe perform a simple control experiment in which we take one of the standard benchmarks of link prediction in knowledge bases, called FB15K-237, and train a CP model for different values of the rank D and a PCP model with D = d with vanilla Adagrad. The full experimental protocol, including hyperparameter tuning, is similar to our main experiments and is described in Section 7.2. Figure 1a plots the performances in terms of the standard metric mean reciprocal rank (higher is better) as a function of D for CP (blue curve) and PCP (red curve, called PCP (Adagrad)).\nWe observe that CP obtains significantly better performances than PCP for larger embedding dimension D. 
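In the control experiment, the fixed projections can be drawn once, for example via a QR decomposition of a Gaussian matrix. The following sketch (a hypothetical setup, not the paper's code) also checks the equivalence with CP in the degenerate case D = d.

```python
import numpy as np

rng = np.random.default_rng(0)
d = D = 8                                        # degenerate case used in the control
P1, _ = np.linalg.qr(rng.normal(size=(D, d)))    # orthonormal columns; square => unitary
P3, _ = np.linalg.qr(rng.normal(size=(D, d)))

u, v, w = rng.normal(size=(3, d))
# With D = d, the PCP embeddings (P1^T u, v, P3^T w) reproduce any CP score <u, v, w>,
# since P1 P1^T = P3 P3^T = I:
score_cp = float(np.sum(u * v * w))
score_pcp = float(np.sum((P1 @ (P1.T @ u)) * v * (P3 @ (P3.T @ w))))
assert np.isclose(score_cp, score_pcp)
```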
Since in this toy experiment CP and PCP can represent exactly the same tensors and have equivalent regularizations, the only difference between the algorithms that can explain the difference in performances is in how the optimization is carried out, namely the diagonal rescaling performed by Adagrad: Adagrad adapts the learning rate on a per-parameter basis, depending on previous and current gradients, and is therefore not invariant to the introduction of the matrices P_1 and P_3, even if these are unitary (we provide the formal justification in the next section). This is shown experimentally in Figure 1b, where we plot the average Adagrad coefficients for each embedding dimension (i.e., Adagrad coefficients of the subject/object embedding matrices averaged by column). The introduction of the random P_1 and P_3 flattens the Adagrad weights, which in turn removes all the benefit of the adaptive rescaling of the algorithm.\nFor reference, we also tried to directly learn all parameters including P_1 and P_3 (i.e., learn a CPT model) with vanilla Adagrad. The performances obtained are also lower than those of CP, as shown in Figure 1a (orange curve).\n5 A ROTATION INVARIANT ADAGRAD: ADAimp\nIn this section, we study the optimization problem in more detail, and more precisely the effect of the diagonal rescaling performed by Adagrad. As a reminder, given a sequence of stochastic gradients g^{(t)} of L and denoting G^{(t)} = \\epsilon I + \\sum_{\\tau=1}^{t} g^{(\\tau)} g^{(\\tau)\\top}, the (practical) AdaGrad update is:\n\\theta_p^{(t+1)} = \\theta_p^{(t)} - \\eta\\, g_p^{(t)} / \\sqrt{G_{pp}^{(t)}} \\quad \\text{or equivalently} \\quad \\theta^{(t+1)} = \\theta^{(t)} - \\eta\\, \\mathrm{Diag}(G^{(t)})^{-1/2} g^{(t)},\nwhere Diag(G) is the diagonal matrix obtained by extracting the diagonal elements of G." }, { "heading": "5.1 TWO EQUIVALENT PARAMETRIZATIONS OF PCP", "text": "The following decomposition is equivalent to PCP, but its embeddings are expressed in R^D:\n(\\text{PCPFULL}) \\quad X_{i,j,k} = \\langle P_1 P_1^\\top u_i, v_j, P_3 P_3^\\top w_k \\rangle, \\quad \\text{with } u_i, v_j, w_k \\in \\mathbb{R}^D, \\text{ fixed } P_1, P_3 \\in \\mathbb{R}^{D \\times d}.\nNote that with ũ_i = P_1^⊤ u_i and w̃_k = P_3^⊤ w_k, we go from PCPfull to PCP. The practical difference between PCP and PCPfull is that PCPfull learns embeddings in the high-dimensional space, maintaining the low-rank structure of the overall entity embeddings through the orthogonal projections P_1 P_1^⊤ and P_3 P_3^⊤. The practical interest of PCPfull is not in terms of modeling but rather from the optimization perspective with Adagrad, because it has a structure that is closer to that of CP.\nIndeed, for d = D, P_1 and P_3 disappear in PCPfull, so that optimizing a PCPfull model with Adagrad is equivalent to optimizing a CP model with Adagrad. This property suggests an alternative to the vanilla PCP + Adagrad algorithm, which we call implicit Adagrad:\nImplicit Adagrad: Adaimp The approach we propose is to effectively optimize PCPfull with Adagrad. However, when d < D, which is the interesting case for PCP, we notice that we do not need to maintain embeddings in R^D. Our approach, called Adaimp, computes the gradients and Adagrad coefficients with respect to u_i, w_k ∈ R^D, but the full-dimensional factor matrices U and W are never explicitly stored in memory. Rather, we store ũ_i = P_1^⊤ u_i and w̃_k = P_3^⊤ w_k ∈ R^d, which is all that is required for any model computation in PCPfull since P_1 and P_3 are fixed. Overall, the effective model parameters are exactly the same as in PCP, and we call this approach PCP + Adaimp.\nAn Adaimp update is described in Algorithm 2. While PCP + Adagrad and PCP + Adaimp work with the same number of model parameters, the fundamental difference is the computation of the Adagrad coefficients. 
Since Adaimp effectively applies Adagrad to PCPfull, we need to maintain the Adagrad coefficients in R^D even when d < D: the overall update is first computed in R^D and projected back to R^d after the application of the Adagrad rescaling. In contrast, in vanilla PCP + Adagrad, the gradient is projected to R^d before the Adagrad rescaling.\n\n5.2 IMPLICIT OPTIMIZATION OF PCPFULL: ADAimp\n\nIn this section, we discuss the Adaimp updates more formally, and how they compare to PCP + Adagrad. In the following, Ũ, W̃ are in R^{N×d}, whereas U, V and W are in R^{N×D}. We take d ≤ D and P_1, P_3 ∈ R^{D×d}, and we use the notation Π_1 = P_1 P_1^⊤ and Π_3 = P_3 P_3^⊤.\n\nThe empirical risk L can be expressed as a function of three matrices M^{(1)}, M^{(2)} and M^{(3)} corresponding to the factors of a CP decomposition. We focus on the following problems:\n(\\text{PCP}) \\quad \\arg\\min_{\\tilde U, V, \\tilde W} L(\\tilde U P_1^\\top, V, \\tilde W P_3^\\top) \\qquad (\\text{PCPFULL}) \\quad \\arg\\min_{U, V, W} L(U \\Pi_1, V, W \\Pi_3)\nWe focus on a step at time (t) on the vectors ũ_i and u_i. We assume that at this time t the tensor iterates are the same, that is, Ũ = U P_1 (resp. W̃ = W P_3), so that [[Ũ P_1^⊤, V, W̃ P_3^⊤]] = [[U Π_1, V, W Π_3]]. In this case, the gradient ∇_{M^{(1)}_i} L is the same in both optimization problems; we denote it by g_i^{(t)}.\n\nLet G_i^{(t)} = \\epsilon I + \\sum_{\\tau=1}^{t} g_i^{(\\tau)} g_i^{(\\tau)\\top}. The updates for (PCP) are:\n\\tilde u_i^{(t+1)} = \\tilde u_i^{(t)} - \\eta\\, \\mathrm{Diag}(P_1^\\top G_i^{(t)} P_1)^{-1/2} P_1^\\top g_i^{(t)}. \\quad (3)\nNote that due to the presence of P_1 inside the Diag operator, the update (3) is not rotation invariant. Moreover, for random P_1, the matrix P_1^⊤ G_i^{(t)} P_1 will be far from diagonal with high probability, making its diagonal meaningless. This is visualized in Figure 1b.\n\nSimilar updates can be derived for (PCPfull):\nu_i^{(t+1)} = u_i^{(t)} - \\eta\\, \\mathrm{Diag}(\\Pi_1 G_i^{(t)} \\Pi_1)^{-1/2} \\Pi_1 g_i^{(t)}. \\quad (4)\nAs a sanity check: for d = D we have Π_1 = I, and the update (4) is equivalent to the Adagrad update for the CP model. In the general case d ≤ D, in order to avoid storing U ∈ R^{N×D}, we apply these updates implicitly with Adaimp, by storing ũ_i^{(t)} = P_1^⊤ u_i^{(t)} in R^d. Let us compare the updates:\n(\\text{ADAGRAD}) \\quad \\tilde u_i^{(t+1)} = \\tilde u_i^{(t)} - \\eta \\times \\underbrace{\\mathrm{Diag}(P_1^\\top G_i^{(t)} P_1)^{-1/2}}_{\\text{Adagrad in } \\mathbb{R}^d} \\times \\underbrace{P_1^\\top g_i^{(t)}}_{\\text{projection to } \\mathbb{R}^d}\n(\\text{ADA}_{\\mathrm{imp}}) \\quad \\tilde u_i^{(t+1)} = \\tilde u_i^{(t)} - \\eta \\times \\underbrace{P_1^\\top}_{\\text{projection to } \\mathbb{R}^d} \\times \\underbrace{\\mathrm{Diag}(\\Pi_1 G_i^{(t)} \\Pi_1)^{-1/2}}_{\\text{Adagrad in } \\mathbb{R}^D} \\times \\underbrace{\\Pi_1 g_i^{(t)}}_{\\text{projection to } \\mathrm{Im}(\\Pi_1) \\subset \\mathbb{R}^D}\nGoing back to our control experiment, we note in Figure 1a that PCP + Adaimp matches the performances of CP + Adagrad for all D, indicating that we fixed this optimization issue.\n\n5.3 ALTERNATIVES TO ADAimp\n\nAnother solution would be to use Adagrad projected on the column space of Π, but we show in Appendix 9.1 that even with the diagonal approximation, this is impractical. Note that the version of Adagrad which uses the full matrix G^{(t)} is rotation invariant (see Appendix 9.2 for details), but it cannot be used at the scale of our problems.\n\nIt could be argued that the strength of the AdaGrad algorithm in our context mostly comes from its adaptation to the different frequencies of updates of each embedding. In fact, this is one of the examples chosen in Duchi et al. (2011) to display the advantages of AdaGrad compared to stochastic gradient descent. A version of AdaGrad that would keep only one coefficient per embedding (we call this version Adarow) would be invariant to unitary transforms by design and would adapt to the various update frequencies of different embeddings. In fact, this version of AdaGrad is used to save memory in Lerer et al. (2019). We test this algorithm in Appendix 9.8. 
The difference in performances shows that this adaptation is not sufficient to recover the performances of the finer diagonal AdaGrad in our setting. The claim that the approximations made in Adaimp are indeed better is further backed by the experiments in the next section." }, { "heading": "5.4 COMPLEXITY", "text": "The time complexity of our Adaimp update for a batch of size B is O(D · d · B), which is similar, up to constants, to the complexity of updates for the AdaGrad algorithm. We do not notice any runtime differences between our algorithm applied in dimensions (d, D) and a CP decomposition of dimension D (see Section 7). The runtime for large enough D is dominated by the matrix product (O(D² · B)) required to compute the cross-entropy in Equation (1).\n\nAlgorithm 1 Step of PComplEx training\n  (i, j, k) ← sample from S\n  g_i, g_j, g_k ← gradients in (P u_i, P u_j, w_k)\n  u_i, G̃_i ← Adaimp(η, u_i, g_i, G̃_i, P)\n  u_j, G̃_j ← Adaimp(η, u_j, g_j, G̃_j, P)\n  w_k, G_k ← AdaGrad(η, w_k, g_k, G_k)\n\nAlgorithm 2 Adaimp\n  Input: η, x^{(t)}, g^{(t)}, G̃^{(t−1)}, P\n  g̃^{(t)} ← P (P^⊤ g^{(t)})\n  G̃^{(t)} ← G̃^{(t−1)} + g̃^{(t)} ⊙ g̃^{(t)}\n  x^{(t+1)} ← x^{(t)} − η P^⊤ Diag(G̃^{(t)})^{−1/2} g̃^{(t)}\n  return x^{(t+1)}, G̃^{(t)}\n\nFigure 2: The two algorithms used in the training of PComplEx" }, { "heading": "6 PROJECTED COMPLEX", "text": "As the state-of-the-art variant of CP is ComplEx (Trouillon et al., 2016; Lacroix et al., 2018), we propose the following alternative to PCP based on ComplEx, optimized with Adaimp in practice. Given the ComplEx decomposition X = Re([[U, V, Ū]]), a low-rank decomposition of the entity factor U as PŨ leads to the model PComplEx we use in the experiments of Section 7:\n(\\text{PCOMPLEX}) \\quad X_{ijk} = \\langle P u_i, v_j, P u_k \\rangle = \\langle u_i, P^\\top \\mathrm{Diag}(v_j) P, u_k \\rangle, \\quad u_i, u_k \\in \\mathbb{C}^d,\\ v_j \\in \\mathbb{C}^D,\\ \\text{fixed } P \\in \\mathbb{R}^{D \\times d}.\nPComplEx is similar to ComplEx but with interactions described by full matrices of rank D that share the same basis. We learn this decomposition with Algorithms 1 and 2." }, { "heading": "7 EXPERIMENTS", "text": "In this section, we compare ComplEx optimized with AdaGrad and PComplEx optimized with Adaimp. We optimize the regularized empirical risk of Equation (2). Following Lacroix et al. (2018), we regularize ComplEx with the weighted nuclear-3 norm, which is equivalent to regularizing \\|u_i\\|_3^3 + \\|v_j\\|_3^3 + \\|w_k\\|_3^3 for each training example (i, j, k). For PComplEx-based models, we regularize \\|P u_i\\|_3^3 + \\|v_j\\|_3^3 + \\|P u_k\\|_3^3 by analogy.\nWe conduct all experiments on a Quadro GP100 GPU. The code for PComplEx and Adaimp is available in the supplementary materials; experiments on ComplEx use the code¹ from Lacroix et al. (2018). We include results from TuckER (Balažević et al., 2019b), DRT and SRT, which are the two models considered in Wang et al. (2019), ConvE (Dettmers et al., 2018), HolEx (Xue et al., 2018), LFM (Jenatton et al., 2012) and MurP (Balažević et al., 2019a), without re-implementation on our part. All the parameters for the experiments in this section are reported in Appendix 9.11." }, { "heading": "7.1 DATASETS", "text": "WN18 (Bordes et al., 2013) is taken from the Wordnet database, which contains words and relations between them. WN18RR (Dettmers et al., 2018) is a filtering of this dataset which removes train/test leakage. YAGO3-10 (Dettmers et al., 2018) is taken from the eponymous knowledge base. Finally, SVO (Jenatton et al., 2012) contains observations of Subject, Verb, Object triples. All statistics for these datasets can be found in Appendix 9.13. Experiments on the FB15K and FB15K-237 datasets are deferred to Appendix 9.10." 
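As a concrete companion to Algorithm 2, here is a minimal Python sketch of one Adaimp step. It is an illustration under our own conventions (the function name and the additive constant inside the square root are assumptions), not the reference implementation from the supplementary materials.

```python
import numpy as np

def adaimp_step(eta, x, g, G_acc, P, eps=1e-8):
    """One Adaimp update, following Algorithm 2 (sketch, not the reference code).

    x     : stored low-dimensional embedding, shape (d,)
    g     : stochastic gradient w.r.t. the projected embedding P x, shape (D,)
    G_acc : accumulated squared projected gradients, shape (D,)
    P     : fixed projection with orthonormal columns, shape (D, d)
    """
    g_proj = P @ (P.T @ g)                # project the gradient onto Im(P), in R^D
    G_acc = G_acc + g_proj * g_proj       # Adagrad coefficients maintained in R^D
    x = x - eta * (P.T @ (g_proj / np.sqrt(G_acc + eps)))  # rescale in R^D, map to R^d
    return x, G_acc
```

Predicate embeddings, which live directly in R^D, are updated with vanilla AdaGrad, as in Algorithm 1.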
}, { "heading": "7.2 RESULTS", "text": "We report the filtered Mean Reciprocal Rank (Nickel et al., 2016b) on Figure 3. For SVO, we report the only figure available in previous work which is the filtered hits at 5% (Jenatton et al., 2012). These measures are detailed in Appendix 9.11. Only the grid-search parameters were given for LFM, so we were not able to obtain a precise number of parameters for the number they report.\n1available at https://github.com/facebookresearch/kbc\nOn WN18, SVO and YAGO3-10 we observe sizable performance gains for low embedding sizes : up to 0.14 MRR points on WN18, 0.05 MRR points on YAGO and 0.03 H@5% points on SVO.\nThe TuckER (Balažević et al., 2019b) model performs similarly to PComplEx and ComplEx except on FB15K and WN18 where it underperforms (see Appendix 9.10). We expect this discrepancy to come from a less extensive grid-search rather than any intrinsic differences in the models that are both based on the Tucker decomposition. The consistency on all operating points of our method with ComplEx shows an advantage of our method, which enjoys the same learning rate robustness as AdaGrad, and does not require choosing a learning-rate decay, leading to easier experiments with only the regularization strength to tune. The MurP model (Balažević et al., 2019a) provides good performances for low embedding sizes on WN18RR, but underperforms on FB15K-237 (see Appendix 9.10). All other models fail to match the performances of ComplEx and PComplEx with equivalent number of parameters.\nVariance of performances in PComplEx due to random choice of P is similar to the variance of ComplEx. We present experiments on the WN18 dataset for 5 different seeds in Appendix 9.9." }, { "heading": "8 CONCLUSION", "text": "By observing that the core tensor of the Tucker decomposition can itself be decomposed, we obtain new models that are reminiscent of the canonical decomposition with low-rank factors. We provide experimental evidence that a naive application of AdaGrad on this decomposition fails, due to individual coordinates losing their meaning. We propose a new algorithm, Adaimp, which fixes this issue. Our model, when optimized with Adaimp, provides better performances than ComplEx in the low-rank regime, and matches its performances in the other regimes." }, { "heading": "9 APPENDIX", "text": "In subsections 9.1 and 9.2 where we discuss the Adagrad algorithm, we do so in a general setting, where we study, for fixed P with orthonormal columns, the problem :\nmin θ f(Pθ)." }, { "heading": "9.1 PROJECTED ADAGRAD UPDATE", "text": "Let D denote Diag(G)1/2 for the classical version of Adagrad and G1/2 itself for the “full” version of the update. When the parameter θ is constrained to a set Θ, the update proposed in Eq.(1) in Duchi et al. (2011) the one obtained by solving\nmin θ∈Θ\n(θ̃ − z)⊤D(θ̃ − z) with z = θ̃(t) − ηD−1g(t)\nTo enforce the constraint that θ̃ = Pθ, we can consider the Lagrangian\nL(θ̃, θ;λ) = (θ̃ − z)⊤D(θ̃ − z)− λ⊤(θ̃ − Pθ)\nwhose stationary points satisfy D(θ̃ − z) = λ and P⊤λ = 0. So this entails P⊤D(θ̃ − θ̃(t)) = ηP⊤g(t) and finally using θ̃ = Pθ we obtain an update in θ as follows\nθ(t+1) = θ(t) − η(P⊤DP⊤)−1Pg(t). 
Clearly, P^⊤DP is in general non-diagonal whether D is diagonal or not, and so this approach does not provide a computationally efficient update.\nIf D = G^{1/2}, then since P G^{1/2} P^⊤ = (P G P^⊤)^{1/2}, the update is the same as the full Adagrad update (5) that we derive in the following section, and replacing (P G P^⊤)^{1/2} by its diagonal approximation recovers update (3)." }, { "heading": "9.2 THE TWO FULL ADAGRAD UPDATES AND THE QUALITY OF APPROXIMATIONS OF THE DIAG VERSIONS", "text": "If we consider the full versions of the Adagrad updates, then (letting again Π = PP^⊤) its application to θ ↦ f(Pθ) and θ̃ ↦ f(Πθ̃) yields respectively\n\\tilde\\theta^{(t+1)} = \\tilde\\theta^{(t)} - \\eta\\, P (P^\\top G^{(t)} P)^{-1/2} P^\\top g^{(t)} \\quad \\text{and} \\quad (5)\n\\tilde\\theta^{(t+1)} = \\tilde\\theta^{(t)} - \\eta\\, \\Pi (\\Pi G^{(t)} \\Pi)^{-\\dagger/2} \\Pi g^{(t)}, \\quad (6)\nwhere M^† denotes the pseudo-inverse of a matrix M. As it turns out, the two updates are equivalent:\nIndeed, first (ΠGΠ)^{1/2} = P (P^⊤GP)^{1/2} P^⊤, because P^⊤P = I implies that\nP (P^\\top G P)^{1/2} P^\\top P (P^\\top G P)^{1/2} P^\\top = \\Pi G \\Pi,\nand the p.s.d. square root is unique. Second, taking the pseudo-inverse of this identity, we have\n(\\Pi G \\Pi)^{\\dagger/2} = \\big(P (P^\\top G P)^{1/2} P^\\top\\big)^\\dagger = P (P^\\top G P)^{-1/2} P^\\top, \\quad (7)\nbecause, if H = (P^⊤GP)^{1/2} is an invertible matrix, then (PHP^⊤)^† = PH^{−1}P^⊤, given that PHP^⊤ PH^{−1}P^⊤ = PP^⊤. Finally, multiplying both sides of Eq. (7) by Π shows that\n\\Pi (\\Pi G^{(t)} \\Pi)^{\\dagger/2} \\Pi = P (P^\\top G^{(t)} P)^{-1/2} P^\\top.\nThis shows that although Adagrad is not invariant for any invertible P, it is invariant for any P such that P^⊤P = I. Eq. (5) seems in general simpler than (6), but note that if D = d, then Π is the identity, and (6) shows that both full updates are in that case actually equivalent to the full update of Adagrad applied to plain CP.\nFinally, the equivalence of the full updates discussed above strongly suggests that if our proposed update performs better than the naive application of Adagrad to PCP, it is because Π Diag(ΠG^{(t)}Π)^{−1/2} Π is a better approximation of (G^{(t)})^{−†/2} than P Diag(P^⊤G^{(t)}P)^{−1/2} P^⊤, while not being much more computationally expensive." }, { "heading": "9.3 CP-TUCKER", "text": "We have X = [[C; U, V, W]] and C = [[P_1^⊤, P_2^⊤, P_3^⊤]]. For all i, j, k, we have:\nX_{i,j,k} = \\sum_{r_1, r_2, r_3} C_{r_1,r_2,r_3} U_{i,r_1} V_{j,r_2} W_{k,r_3}\n= \\sum_{r_1, r_2, r_3} \\Big( \\sum_{s=1}^{D} [P_1]_{s,r_1} [P_2]_{s,r_2} [P_3]_{s,r_3} \\Big) U_{i,r_1} V_{j,r_2} W_{k,r_3}\n= \\sum_{s=1}^{D} \\Big( \\sum_{r_1} [P_1]_{s,r_1} U_{i,r_1} \\Big) \\Big( \\sum_{r_2} [P_2]_{s,r_2} V_{j,r_2} \\Big) \\Big( \\sum_{r_3} [P_3]_{s,r_3} W_{k,r_3} \\Big)\n= \\langle P_1 u_i, P_2 v_j, P_3 w_k \\rangle" }, { "heading": "9.4 HOLEX AND LATENT FACTOR MODEL", "text": "" }, { "heading": "9.4.1 HOLEX", "text": "The HolEx model (Xue et al., 2018) writes X_{i,j,k} = \\sum_{r=1}^{R} \\langle (c^r \\odot u_i) \\star u_j, w_k^r \\rangle, where ⋆ denotes the circular correlation². Exploiting the equivalence between HolE and ComplEx (Hayashi & Shimbo, 2017), denoting by F the discrete Fourier transform (with values in C), we can write for embeddings of size d:\n\\sum_{r=1}^{R} \\langle (c^r \\odot u_i) \\star u_j, w_k^r \\rangle = \\frac{1}{d} \\mathrm{Re}\\Big( \\sum_{r=1}^{R} \\langle \\mathcal{F}(c^r \\odot u_i), \\mathcal{F}(u_j), \\mathcal{F}(w_k^r) \\rangle \\Big) = \\frac{1}{d} \\mathrm{Re}\\Big( \\sum_{r=1}^{R} \\langle \\mathcal{F}(c^r) \\star \\mathcal{F}(u_i), \\mathcal{F}(u_j), \\mathcal{F}(w_k^r) \\rangle \\Big)\nFor all vectors, we write û_i = F(u_i) ∈ C^d. We can re-write the circular correlation F(c^r) ⋆ F(u_i) as C^r û_i, where C^r ∈ C^{d×d} is the circulant matrix associated with c^r ∈ R^d. We have:\n\\sum_{r=1}^{R} \\langle (c^r \\odot u_i) \\star u_j, w_k^r \\rangle = \\frac{1}{d} \\mathrm{Re}\\Big( \\sum_{r=1}^{R} \\langle C^r \\hat u_i, \\hat u_j, \\hat w_k^r \\rangle \\Big).\nFinally, with C¹ = [C^1, ..., C^R] ∈ C^{Rd×d} the vertical stacking of all the C^r, C² = [I_d, ..., I_d] and ŵ_k = [ŵ_k^1, ..., ŵ_k^R]:\n\\sum_{r=1}^{R} \\langle (c^r \\odot u_i) \\star u_j, w_k^r \\rangle = \\frac{1}{d} \\mathrm{Re}\\big( \\langle C^1 \\hat u_i, C^2 \\hat u_j, \\hat w_k \\rangle \\big)\nHolEx with embeddings of size d is thus close to CPT with D = Rd, allowing for two different complex matrices to act on left and right hand side embeddings." 
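The equivalence derived above, and the failure of its diagonal counterpart, can be checked numerically. The following sketch is our own construction, with an arbitrary PSD matrix standing in for the Adagrad accumulator: it verifies Eq. (7) and shows that the Diag operator does not commute with rotations.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 6, 3
P, _ = np.linalg.qr(rng.normal(size=(D, d)))     # orthonormal columns
Pi = P @ P.T
A = rng.normal(size=(D, D))
G = A @ A.T + 1e-3 * np.eye(D)                   # PSD stand-in for the accumulator

def inv_sqrt(M):
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

lhs = P @ inv_sqrt(P.T @ G @ P) @ P.T            # right-hand side of Eq. (7)
w, Q = np.linalg.eigh(Pi @ G @ Pi)               # pseudo-inverse square root of Pi G Pi
w_safe = np.where(w > 1e-8, w, 1.0)
pinv_sqrt = Q @ np.diag(np.where(w > 1e-8, 1.0 / np.sqrt(w_safe), 0.0)) @ Q.T
print(np.allclose(Pi @ pinv_sqrt @ Pi, lhs))     # True: the full updates agree

R, _ = np.linalg.qr(rng.normal(size=(D, D)))     # a rotation

def dg(M):
    return np.diag(np.diag(M))

print(np.allclose(dg(R.T @ G @ R), R.T @ dg(G) @ R))   # False: Diag is not invariant
```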
}, { "heading": "9.4.2 LATENT FACTOR MODEL", "text": "The latent factor model defines the score for a triple (i, j, k) as:\nXi,j,k = ⟨(si + z), Rj(ok + z′)⟩, with Rj = D∑\nr=1\nαjrurv ⊤ r .\nRemoving the bias terms z and z′ and gathering ur and vr into matrices P1 and P3 leads to the model CPT. In the PCP model, we fix P1 and P3 instead of learning them. We do not use a sparsity inducing penalty on α but rather a variational form of the nuclear norm on the whole tensor.\n2[a ⋆ b]k = ∑d−1\ni=0 aib(i+k) mod d." }, { "heading": "9.5 TUCKER2 WITH CP-TUCKER", "text": "Let for all 1 ≤ r ≤ d, M (r) be a matrix of zeros except its r− th column which is all one. Let P1 be the vertical concatenation of all (M (r))r=1..d and P2 the vertical concatenation of d identity matrix in Rd. Remember that for all k, wk is an element of Rd 2\n. For all 0 ≤ r < d, let wrk be the restriction of wk to its [rd, (r + 1)d] coordinates.\nThen P1 and P2 are elements of Rd 2×d and we have for all i, j, k:\n⟨P1ui, P2vj , wk⟩ = d∑\nr1=1\n⟨M (r1)ui, Idvj , wr1k ⟩\n= d∑ r1=1 ui,r1⟨vj , w r1 k ⟩ by definition of M (r1)\n= d∑ r1=1 ui,r1\n( d∑\nr2=1\nvj,r2wk,r1r2 ) by definition of wr1k\n= u⊤i Mat(wk)vj" }, { "heading": "9.6 COMPLETE PCOMPLEX ALGORITHM", "text": "Let U ∈ CN×d be the the entity embedding matrix and V ∈ C2L×D be the predicate embedding matrix. Let P : RN×d 7→ RN×D such that P(Ui) = PUi. Let GUt and GVt be the stochastic gradients with respect to U and V at time t.\nAlgorithm 3 PComplEx optimized with Adaimp\nInput: learning rate η, (random) matrix P with orthogonal columns, ϵ CU , CV ← 0 while U, V not converged do G̃Ut ← P(GUt ) CU ← CU + G̃Ut ⊙ G̃Ut CV ← CV +GVt ⊙GVt U ← U − η · P⊤(G̃Ut /( √ CU + ϵ))\nV ← V − η ·GVt /( √ CV + ϵ))\nend while return U" }, { "heading": "9.7 ADAM - IMPLICIT", "text": "For WN18RR, we used the ideas presented in this paper, but adapted to the Adam (Kingma & Ba, 2014) optimization algorithm. Similar to Adaimp, first and second moment estimates are accumulated in RD and we project back to Rd only for the parameter update. For simplicity, we present here the dense version of the algorithm applied to the entity embeddings U ∈ RN×d. Let P : RN×d 7→ RN×D such that P(Ui) = PUi. Let Gt be the stochastic gradient with respect to U at time t.\n9.8 ADArow\nFor the same control experiment as in Section 5, we observe the performances of Adarow which is rotation invariant by design.\nAlgorithm 4 Adamimp applied to U Input: η, β1, β2,P, ϵ m0, v0, t← 0 while U not converged do t← t+ 1 G̃t ← P(Gt) mt ← β1 ·mt−1 + (1− β1) · G̃t vt ← β2 · vt−1 + (1− β2) · G̃t ⊙ G̃t m̂t ← mt/(1− βt1) v̂t ← vt/(1− βt2) U ← U − η · P⊤(m̂t/( √ v̂t + ϵ))\nend while return U" }, { "heading": "9.9 VARIANCE OF PCOMPLEX", "text": "We run 5 grid search and plot the 5 associated convex-hulls on the WN18 dataset optimized with Adaimp. Note that despite the added randomness of the choice of P , there is not more variance in PComplEx than in ComplEx.\n0 20 40 60 80 100\nparams per entities\n0.2\n0.4\n0.6\n0.8\nM R R\nVariance of PComplEx on WN18\nComplEx\nPComplEx" }, { "heading": "9.10 FB15K DATASETS", "text": "We use two subsets of the Freebase knowledge base : FB15K (Bordes et al., 2013) and FB15K237 (Toutanova & Chen, 2015). FB15K-237 is a harder version of FB15K, where some triples have been removed to avoid leakage between the train and test set. 
There is no difference in performance between PComplEx and ComplEx on these datasets.\n[Figure: MRR as a function of parameters per entity on FB15K (ComplEx, PComplEx, TuckER, HolEx) and on FB15K-237 (TuckER, D/SRT, MurP, ConvE).]" }, { "heading": "9.11 EXPERIMENTAL DETAILS", "text": "Metrics Let rank(X̂_{i,j,:}; k) be the rank of X̂_{i,j,k} in the sorted list of values of X̂_{i,j,:}. We report the MRR for most datasets:\n\\mathrm{MRR}(X) = \\frac{1}{|S|} \\sum_{(i,j,k) \\in S} \\frac{1}{\\mathrm{rank}(X_{i,j,:}; k)}.\nFor SVO, the task is slightly different, as the ranking happens on the predicate mode. The metric reported in Jenatton et al. (2012) is Hits@5%, defined as:\n\\mathrm{H@5\\%}(X) = \\frac{1}{|S|} \\sum_{(i,j,k) \\in S} \\mathbb{1}\\big(\\mathrm{rank}(X_{i,:,k}; j) \\leq 227\\big).\nThe metrics we report are filtered by ignoring other true positives when computing the ranks, as done in Bordes et al. (2013) (a reference sketch of this computation is given at the end of this appendix).\nNumber of parameters per entity We count the number of floats in the model and divide by the number of entities in the dataset. For the different methods, the numbers of parameters are:\n• ComplEx: 2 · d · (N + 2L)\n• PComplEx: 2 · d · N + 2 · D · 2L + d · D\n• TuckEr: N · d_e + L · d_p + d_e² · d_p\n• MurP: N · (d + 1) + 2 · L · d\n• ConvE: taken from Dettmers et al. (2018)\n• D/SRT: taken from Wang et al. (2019)\n• HolEx: d · (N + L)\nGrid Search For SVO:\n• For ComplEx, we vary d in [5, 25, 50, 100, 500, 1000, 2000]. For PComplEx, we vary d in [5, 25, 50, 100, 500].\n• The strength of the regularizer, ν, varies in [5e−4, 1e−3, 5e−3, 1e−2, 5e−2, 1e−1].\n• Finally, for PComplEx, we vary the dimension D in [5, 25, 50, 100, 500, 1000, 2000, 4000, 8000].\nFor all other datasets, we run 500 epochs:\n• For FB15K and FB15K-237, we vary d in [5, 25, 50, 100, 500, 1000, 2000]. For YAGO3-10, WN18 and WN18RR, we add ranks 8 and 16 to that list.\n• The strength of the regularizer, ν, varies in [5e−4, 1e−3, 5e−3, 1e−2, 5e−2, 1e−1].\n• Finally, for PComplEx, we vary the dimension D in [5, 25, 50, 100, 500, 1000, 2000].\nFor TuckEr, we use the hyperparameters described in Balažević et al. (2019b) for each dataset. We apply a multiplicative factor γ to the dropout rates and run a grid search over this parameter.\n• For WN18RR, we vary d_e in [12, 15, 18, 20, 40, 100, 200]. For FB15K-237, we vary d_e in [10, 20, 30, 40, 50, 80, 100, 200]. For WN18, d_e varies in [5, 8, 16, 25, 50, 100]. For FB15K, d_e varies in .\n• For WN18RR and WN18, we search γ over [0, 0.5, 1]. For FB15K-237, where this led to non-increasing performance as a function of the number of parameters, we refined this grid to [0, 0.1, 0.2, ..., 0.7] for d_e ≤ 80. For FB15K, we use a grid of [0, 0.2, 0.4, ..., 1].\nFor MurP, we use the hyperparameters described in Balažević et al. (2019a) for each dataset and vary d in [10, 12, 15, 18, 20, 40, 100, 200] on WN18RR and in [10, 20, 40, 100, 200] for FB15K-237.\nOther details Batch size is fixed to 1000; the learning rate is 1e−1 both for Adagrad and for Adaimp; we run 100 epochs to ensure convergence. The best model based on validation MRR is selected and we report the corresponding test MRR." }, { "heading": "9.12 RUNNING TIMES", "text": "We give here the running times of each method per epoch, on the WN18RR dataset, for comparable dimensionalities, run on a P100 GPU. We use the original implementations for MurP (Balažević et al., 2019a) and TuckEr (Balažević et al., 2019b). For ComplEx, we use the implementation from Lacroix et al. (2018).\n• ComplEx (d = 200): 4s/epoch.\n• PComplEx (d = D = 200, Adaimp): 5s/epoch.\n• MurP (d = 200): 38s/epoch. 
• TuckEr (d_e = 200, d_r = 200): 24s/epoch.\nNote that these running times are implementation dependent. The figure below gives the learning curves of MurP, TuckEr, ComplEx and PComplEx at an operating point on WN18RR where all methods are close in final performance.\n[Figure: Convergence on WN18RR - MRR as a function of epoch for MurP, TuckEr, ComplEx and PComplEx.]" }, { "heading": "9.13 DATASET STATISTICS", "text": "" } ]
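As a companion to the metric definitions in Section 9.11 above, here is a minimal Python sketch of the filtered MRR. The data structures (a scoring callable and a dictionary of known true triples) are hypothetical, not taken from the released code.

```python
import numpy as np

def filtered_mrr(scores, test_triples, known_true):
    """Filtered MRR of Section 9.11. `scores(i, j)` returns the N candidate-object
    scores X_{i,j,:}; `known_true` maps (i, j) to the set of true objects."""
    rr = []
    for (i, j, k) in test_triples:
        x = scores(i, j).copy()
        for kk in known_true.get((i, j), ()):    # filtering: drop other true positives
            if kk != k:
                x[kk] = -np.inf
        rank = 1 + int(np.sum(x > x[k]))
        rr.append(1.0 / rank)
    return float(np.mean(rr))
```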
2019
null
SP:62a75399aa97a61432385cf1dffabb674741a18a
[ "This paper proposed to remove all bias terms in denoising networks to avoid overfitting when different noise levels exist. With analysis, the paper concludes that the dimensions of subspaces of image features are adaptively changing according to the noise level. An interesting result is that the MSE is proportional to sigma instead of sigma^2 when using bias-free networks, which provides some theoretical evidence of advantage of using BF-CNN.", "This paper looks at how deep convolutional neural networks for image denoising can generalize across various noise levels. First, they argue that state-of-the-art denoising networks perform poorly outside of the training noise range. The authors empirically show that as denoising performance degrades on unseen noise levels, the network residual for a specific input is being increasingly dominated by the network bias (as opposed to the purely linear Jacobian term). Therefore, they propose using bias-free convolutional neural networks for better generalization performance in image denoising. Their experimental results show that bias-free denoisers significantly outperform their original counter-parts on unseen noise levels across various popular architectures. Then, they perform a local analysis of the bias-free network around an input image that is now a strictly linear function of the input. They empirically demonstrate that the Jacobian is approximately low-rank and symmetric, therefore the effect of the denoiser can be interpreted as a nonlinear adaptive filter that projects the noisy image onto a low-dimensional signal subspace. The authors show that most of the energy of the clean image falls into the signal subspace and the effective dimensionality of this subspace is inversely proportional to the noise level." ]
We study the generalization properties of deep convolutional neural networks for image denoising in the presence of varying noise levels. We provide extensive empirical evidence that current state-of-the-art architectures systematically overfit to the noise levels in the training set, performing very poorly at new noise levels. We show that strong generalization can be achieved through a simple architectural modification: removing all additive constants. The resulting "bias-free" networks attain state-of-the-art performance over a broad range of noise levels, even when trained over a narrow range. They are also locally linear, which enables direct analysis with linear-algebraic tools. We show that the denoising map can be visualized locally as a filter that adapts to both image structure and noise level. In addition, our analysis reveals that deep networks implicitly perform a projection onto an adaptively-selected low-dimensional subspace, with dimensionality inversely proportional to noise level, that captures features of natural images.
[ { "affiliations": [], "name": "Sreyas Mohan" }, { "affiliations": [], "name": "Zahra Kadkhodaie" }, { "affiliations": [], "name": "Eero P. Simoncelli" }, { "affiliations": [], "name": "Carlos Fernandez-Granda" } ]
[ { "authors": [ "S Grace Chang", "Bin Yu", "Martin Vetterli" ], "title": "Adaptive wavelet thresholding for image denoising and compression", "venue": "IEEE Trans. Image Processing,", "year": 2000 }, { "authors": [ "Yunjin Chen", "Thomas Pock" ], "title": "Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration", "venue": "IEEE Trans. Patt. Analysis and Machine Intelligence,", "year": 2017 }, { "authors": [ "Sungjoon Choi", "John Isidoro", "Pascal Getreuer", "Peyman Milanfar" ], "title": "Fast, trainable, multiscale denoising", "venue": "25th IEEE International Conference on Image Processing (ICIP),", "year": 2018 }, { "authors": [ "Kostadin Dabov", "Alessandro Foi", "Vladimir Katkovnik", "Karen Egiazarian" ], "title": "Image denoising with block-matching and 3d filtering", "venue": "In Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning,", "year": 2006 }, { "authors": [ "D Donoho", "I Johnstone" ], "title": "Adapting to unknown smoothness via wavelet shrinkage", "venue": "J American Stat Assoc,", "year": 1995 }, { "authors": [ "Michael Elad", "Michal Aharon" ], "title": "Image denoising via sparse and redundant representations over learned dictionaries", "venue": "IEEE Trans. on Image processing,", "year": 2006 }, { "authors": [ "Y Hel-Or", "D Shaked" ], "title": "A discriminative approach for wavelet denoising", "venue": "IEEE Trans. Image Processing,", "year": 2008 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proc. IEEE Conf. Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "D. Martin", "C. Fowlkes", "D. Tal", "J. Malik" ], "title": "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics", "venue": "In Proc. 8th Int’l Conf. 
Computer Vision,", "year": 2001 }, { "authors": [ "Peyman Milanfar" ], "title": "A tour of modern image filtering: New insights and methods, both practical and theoretical", "venue": "IEEE signal processing magazine,", "year": 2012 }, { "authors": [ "Grégoire Montavon", "Sebastian Lapuschkin", "Alexander Binder", "Wojciech Samek", "Klaus-Robert Müller" ], "title": "Explaining nonlinear classification decisions with deep taylor decomposition", "venue": "Pattern Rec.,", "year": 2017 }, { "authors": [ "M Raphan", "E P Simoncelli" ], "title": "Optimal denoising in redundant representations", "venue": "IEEE Trans Image Processing,", "year": 2008 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computerassisted intervention,", "year": 2015 }, { "authors": [ "Uwe Schmidt", "Stefan Roth" ], "title": "Shrinkage fields for effective image restoration", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "E P Simoncelli", "E H Adelson" ], "title": "Noise removal via Bayesian wavelet coring", "venue": "Proc 3rd IEEE Int’l Conf on Image Proc,", "year": 1996 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": "arXiv preprint arXiv:1312.6034,", "year": 2013 }, { "authors": [ "Carlo Tomasi", "Roberto Manduchi" ], "title": "Bilateral filtering for gray and color images", "venue": "In ICCV,", "year": 1998 }, { "authors": [ "Zhou Wang", "Alan C Bovik", "Hamid R Sheikh", "Eero P Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE Trans. Image Processing,", "year": 2004 }, { "authors": [ "Norbert Wiener" ], "title": "Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications", "venue": null, "year": 1950 }, { "authors": [ "Kai Zhang", "Wangmeng Zuo", "Yunjin Chen", "Deyu Meng", "Lei Zhang" ], "title": "Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising", "venue": "IEEE Trans. Image Processing,", "year": 2017 }, { "authors": [ "Xiaoshuai Zhang", "Yiping Lu", "Jiaying Liu", "Bin Dong" ], "title": "Dynamically unfolding recurrent restorer: A moving endpoint control method for image restoration", "venue": "arXiv preprint arXiv:1805.07709,", "year": 2018 }, { "authors": [ "Yulun Zhang", "Yapeng Tian", "Yu Kong", "Bineng Zhong", "Yun Fu" ], "title": "Residual dense network for image restoration", "venue": "CoRR, abs/1812.10477,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION AND CONTRIBUTIONS", "text": "The problem of denoising consists of recovering a signal from measurements corrupted by noise, and is a canonical application of statistical estimation that has been studied since the 1950’s. Achieving high-quality denoising results requires (at least implicitly) quantifying and exploiting the differences between signals and noise. In the case of photographic images, the denoising problem is both an important application, as well as a useful test-bed for our understanding of natural images. In the past decade, convolutional neural networks (LeCun et al., 2015) have achieved state-of-the-art results in image denoising (Zhang et al., 2017; Chen & Pock, 2017). Despite their success, these solutions are mysterious: we lack both intuition and formal understanding of the mechanisms they implement. Network architecture and functional units are often borrowed from the image-recognition literature, and it is unclear which of these aspects contributes to, or limits, the denoising performance. The goal of this work is advance our understanding of deep-learning models for denoising. Our contributions are twofold: First, we study the generalization capabilities of deep-learning models across different noise levels. Second, we provide novel tools for analyzing the mechanisms implemented by neural networks to denoise natural images.\nAn important advantage of deep-learning techniques over traditional methodology is that a single neural network can be trained to perform denoising at a wide range of noise levels. Currently, this is achieved by simulating the whole range of noise levels during training (Zhang et al., 2017). Here, we show that this is not necessary. Neural networks can be made to generalize automatically across noise\n∗Equal contribution.\nlevels through a simple modification in the architecture: removing all additive constants. We find this holds for a variety of network architectures proposed in previous literature. We provide extensive empirical evidence that the main state-of-the-art denoising architectures systematically overfit to the noise levels in the training set, and that this is due to the presence of a net bias. Suppressing this bias makes it possible to attain state-of-the-art performance while training over a very limited range of noise levels.\nThe data-driven mechanisms implemented by deep neural networks to perform denoising are almost completely unknown. It is unclear what priors are being learned by the models, and how they are affected by the choice of architecture and training strategies. Here, we provide novel linear-algebraic tools to visualize and interpret these strategies through a local analysis of the Jacobian of the denoising map. The analysis reveals locally adaptive properties of the learned models, akin to existing nonlinear filtering algorithms. In addition, we show that the deep networks implicitly perform a projection onto an adaptively-selected low-dimensional subspace capturing features of natural images." }, { "heading": "2 RELATED WORK", "text": "The classical solution to the denoising problem is the Wiener filter (Wiener, 1950), which assumes a translation-invariant Gaussian signal model. The main limitation of Wiener filtering is that it over-smoothes, eliminating fine-scale details and textures. Modern filtering approaches address this issue by adapting the filters to the local structure of the noisy image (e.g. Tomasi & Manduchi (1998); Milanfar (2012)). 
Here we show that neural networks implement such strategies implicitly, learning them directly from the data.
In the 1990s, powerful denoising techniques were developed based on multi-scale (\"wavelet\") transforms. These transforms map natural images to a domain where they have sparser representations. This makes it possible to perform denoising by applying nonlinear thresholding operations in order to discard components that are small relative to the noise level (Donoho & Johnstone, 1995; Simoncelli & Adelson, 1996; Chang et al., 2000). From a linear-algebraic perspective, these algorithms operate by projecting the noisy input onto a lower-dimensional subspace that contains plausible signal content. The projection eliminates the orthogonal complement of the subspace, which mostly contains noise. This general methodology laid the foundations for the state-of-the-art models of the 2000s (e.g. Dabov et al. (2006)), some of which added a data-driven perspective, learning sparsifying transforms (Elad & Aharon, 2006) and nonlinear shrinkage functions (Hel-Or & Shaked, 2008; Raphan & Simoncelli, 2008) directly from natural images. Here, we show that deep-learning models learn similar priors in the form of local linear subspaces capturing image features.
In the past decade, purely data-driven models based on convolutional neural networks (LeCun et al., 2015) have come to dominate all previous methods in terms of performance. These models consist of cascades of convolutional filters and rectifying nonlinearities, which are capable of representing a diverse and powerful set of functions. Training such architectures to minimize mean square error over large databases of noisy natural-image patches achieves current state-of-the-art results (Zhang et al., 2017; Huang et al., 2017; Ronneberger et al., 2015; Zhang et al., 2018a)." }, { "heading": "3 NETWORK BIAS IMPAIRS GENERALIZATION", "text": "We assume a measurement model in which images are corrupted by additive noise: y = x + n, where x ∈ R^N is the original image, containing N pixels, n is an image of i.i.d. samples of Gaussian noise with variance σ², and y is the noisy observation. The denoising problem consists of finding a function f : R^N → R^N that provides a good estimate of the original image, x. Commonly, one minimizes the mean squared error: f = arg min_g E‖x − g(y)‖², where the expectation is taken over some distribution over images, x, as well as over the distribution of noise realizations. In deep learning, the denoising function g is parameterized by the weights of the network, so the optimization is over these parameters. If the noise standard deviation, σ, is unknown, the expectation must also be taken over a distribution of σ. This problem is often called blind denoising in the literature. In this work, we study the generalization performance of CNNs across noise levels σ, i.e. when they are tested on noise levels not included in the training set.
Feedforward neural networks with rectified linear units (ReLUs) are piecewise affine: for a given activation pattern of the ReLUs, the effect of the network on the input is a cascade of linear transformations (convolutional or fully connected layers, W_k), additive constants (b_k), and pointwise multiplications by a binary mask corresponding to the fixed activation pattern (R). Since each of these is affine, the entire cascade implements a single affine transformation. 
For a fixed noisy input image y ∈ R^N with N pixels, the function f : R^N → R^N computed by a denoising neural network may be written
f(y) = W_L R(W_{L−1} ... R(W_1 y + b_1) + ... + b_{L−1}) + b_L = A_y y + b_y, (1)
where A_y ∈ R^{N×N} is the Jacobian of f(·) evaluated at input y, and b_y ∈ R^N represents the net bias. The subscripts on A_y and b_y serve as a reminder that both depend on the ReLU activation patterns, which in turn depend on the input vector y.
Based on equation 1 we can perform a first-order decomposition of the error or residual of the neural network for a specific input: y − f(y) = (I − A_y)y − b_y. Figure 1 shows the magnitude of the residual and the constant, which is equal to the net bias b_y, for a range of noise levels. Over the training range, the net bias is small, implying that the linear term is responsible for most of the denoising (see Figures 9 and 10 for a visualization of both components). However, when the network is evaluated at noise levels outside of the training range, the norm of the bias increases dramatically, and the residual is significantly smaller than the noise, suggesting a form of overfitting. Indeed, network performance generalizes very poorly to noise levels outside the training range. This is illustrated for an example image in Figure 2, and demonstrated through extensive experiments in Section 5." }, { "heading": "4 PROPOSED METHODOLOGY: BIAS-FREE NETWORKS", "text": "Section 3 shows that CNNs overfit to the noise levels present in the training set, and that this is associated with wild fluctuations of the net bias b_y. This suggests that the overfitting might be ameliorated by removing additive (bias) terms from every stage of the network, resulting in a bias-free CNN (BF-CNN). Note that bias terms are also removed from the batch-normalization used during training. This simple change in the architecture has an interesting consequence. If the CNN has ReLU activations, the denoising map is locally homogeneous, and consequently invariant to scaling: rescaling the input by a constant value simply rescales the output by the same amount, just as it would for a linear system.
Lemma 1. Let f_BF : R^N → R^N be a feedforward neural network with ReLU activation functions and no additive constant terms in any layer. For any input y ∈ R^N and any nonnegative constant α,
f_BF(αy) = α f_BF(y). (2)
Proof. We can write the action of a bias-free neural network with L layers in terms of the weight matrix W_i, 1 ≤ i ≤ L, of each layer and a rectifying operator R, which sets to zero any negative entries in its input. Multiplying by a nonnegative constant does not change the sign of the entries of a vector, so for any z with the right dimension and any α > 0, R(αz) = αR(z), which implies
f_BF(αy) = W_L R(W_{L−1} · · · R(W_1 αy)) = α W_L R(W_{L−1} · · · R(W_1 y)) = α f_BF(y). (3)
Note that networks with nonzero net bias are not scaling-invariant, because scaling the input may change the activation pattern of the ReLUs. Scaling invariance is intuitively desirable for a denoising method operating on natural images; a rescaled image is still an image. 
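The homogeneity in Lemma 1 is easy to check numerically. The following PyTorch sketch builds a toy bias-free CNN (the layer sizes are illustrative, and untrained random weights suffice, since the property is purely architectural) and verifies equation 2 up to floating-point error:

```python
import torch
import torch.nn as nn

# A toy bias-free CNN: convolutions without additive constants, ReLUs only.
bf_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1, bias=False), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1, bias=False), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1, bias=False),
)

y = torch.randn(1, 1, 32, 32)   # stand-in for a noisy image
alpha = 3.7                     # any nonnegative constant
with torch.no_grad():
    lhs = bf_cnn(alpha * y)     # f_BF(alpha * y)
    rhs = alpha * bf_cnn(y)     # alpha * f_BF(y)
print(torch.allclose(lhs, rhs, atol=1e-4))  # True
```

Replacing any `bias=False` with a default (biased) convolution breaks the equality, mirroring the remark above about networks with nonzero net bias.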
Note that Lemma 1 holds for networks with skip connections where the feature maps are concatenated or added, because both of these operations are linear.
In the following sections we demonstrate that removing all additive terms in CNN architectures has two important consequences: (1) the networks gain the ability to generalize to noise levels not encountered during training (as illustrated by Figure 2, the improvement is striking), and (2) the denoising mechanism can be analyzed locally via linear-algebraic tools that reveal intriguing ties to more traditional denoising methodology such as nonlinear filtering and sparsity-based techniques." }, { "heading": "5 BIAS-FREE NETWORKS GENERALIZE ACROSS NOISE LEVELS", "text": "In order to evaluate the effect of removing the net bias in denoising CNNs, we compare several state-of-the-art architectures to their bias-free counterparts, which are exactly the same except for the absence of any additive constants within the networks (note that this includes the batch-normalization additive parameter). These architectures include popular features of existing neural-network techniques in image processing: recurrence, multiscale filters, and skip connections. More specifically, we examine the following models (see Section A for additional details):
• DnCNN (Zhang et al., 2017): A feedforward CNN with 20 convolutional layers, each consisting of 3 × 3 filters, 64 channels, batch normalization (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and a skip connection from the initial layer to the final layer.
• Recurrent CNN: A recurrent architecture inspired by Zhang et al. (2018a) where the basic module is a CNN with 5 layers, 3 × 3 filters and 64 channels in the intermediate layers. The order of the recurrence is 4.
• UNet (Ronneberger et al., 2015): A multiscale architecture with 9 convolutional layers and skip connections between the different scales.
• Simplified DenseNet: A CNN with skip connections inspired by the DenseNet architecture (Huang et al., 2017; Zhang et al., 2018b).
We train each network to denoise images corrupted by i.i.d. Gaussian noise over a range of standard deviations (the training range of the network). We then evaluate the network for noise levels that are both within and beyond the training range. Our experiments are carried out on 180 × 180 natural images from the Berkeley Segmentation Dataset (Martin et al., 2001) to be consistent with previous results (Schmidt & Roth, 2014; Chen & Pock, 2017; Zhang et al., 2017). Additional details about the dataset and training procedure are provided in Section B.
Figures 3, 11 and 12 show our results. For a wide range of different training ranges, and for all architectures, we observe the same phenomenon: the performance of CNNs is good over the training range, but degrades dramatically at new noise levels; in stark contrast, the corresponding BF-CNNs provide strong denoising performance over noise levels outside the training range. This holds for both PSNR and the more perceptually-meaningful Structural Similarity Index (Wang et al., 2004) (see Figure 12). Figure 2 shows an example image, demonstrating visually the striking difference in generalization performance between a CNN and its corresponding BF-CNN. Our results provide strong evidence that removing net bias in CNN architectures results in effective generalization to noise levels out of the training range." 
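The first-order analysis of Section 3 (Figure 1) can be sketched with automatic differentiation. Because the network is piecewise affine, A_y y equals the Jacobian-vector product of f at y in the direction y, so the net bias b_y = f(y) − A_y y never requires forming the full Jacobian. In the sketch below, `denoiser` (a trained network mapping noisy images to denoised ones) and a clean test image `x` in [0, 1] are assumed to be given:

```python
import torch
from torch.autograd.functional import jvp

def net_bias_norm(f, y):
    # Locally f(y) = A_y y + b_y, and A_y y = J_f(y) y is a JVP at y in direction y.
    out, ay_y = jvp(f, y, v=y)
    return (out - ay_y).norm().item()

for sigma in [0.05, 0.1, 0.3, 0.5, 1.0]:        # noise standard deviations
    noisy = x + sigma * torch.randn_like(x)
    print(sigma, net_bias_norm(denoiser, noisy))
```

For a network trained on, say, σ ∈ [0, 0.1], the printed norms should stay small inside that range and grow sharply beyond it, as in Figure 1.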
}, { "heading": "6 REVEALING THE DENOISING MECHANISMS LEARNED BY BF-CNNS", "text": "In this section we perform a local analysis of BF-CNN networks, which reveals the underlying denoising mechanisms learned from the data. A bias-free network is strictly linear, and its net action can be expressed as\nfBF(y) = WLR(WL−1...R(W1y)) = Ayy, (4)\nwhere Ay is the Jacobian of fBF(·) evaluated at y. The Jacobian at a fixed input provides a local characterization of the denoising map. In order to study the map we perform a linear-algebraic analysis of the Jacobian. Our approach is similar in spirit to visualization approaches– proposed in the context of image classification– that differentiate neural-network functions with respect to their input (e.g. Simonyan et al. (2013); Montavon et al. (2017))." }, { "heading": "6.1 NONLINEAR ADAPTIVE FILTERING", "text": "The linear representation of the denoising map given by equation 4 implies that the ith pixel of the output image is computed as an inner product between the ith row of Ay, denoted ay(i), and the input image:\nfBF(y)(i) = N∑ j=1 Ay(i, j)y(j) = ay(i) T y. (5)\nThe vectors ay(i) can be interpreted as adaptive filters that produce an estimate of the denoised pixel via a weighted average of noisy pixels. Examination of these filters reveals their diversity, and their relationship to the underlying image content: they are adapted to the local features of the noisy image, averaging over homogeneous regions of the image without blurring across edges. This is shown for two separate examples and a range of noise levels in Figures 4, 13, 14 and 15 for the architectures described in Section 5. We observe that the equivalent filters of all architectures adapt to image structure.\nClassical Wiener filtering (Wiener, 1950) denoises images by computing a local average dependent on the noise level. As the noise level increases, the averaging is carried out over a larger region. As illustrated by Figures 4, 13, 14 and 15, the equivalent filters of BF-CNNs also display this behavior. The crucial difference is that the filters are adaptive. The BF-CNNs learn such filters implicitly from the data, in the spirit of modern nonlinear spatially-varying filtering techniques designed to preserve fine-scale details such as edges (e.g. Tomasi & Manduchi (1998), see also Milanfar (2012) for a comprehensive review, and Choi et al. (2018) for a recent learning-based approach)." }, { "heading": "6.2 PROJECTION ONTO ADAPTIVE LOW-DIMENSIONAL SUBSPACES", "text": "The local linear structure of a BF-CNN facilitates analysis of its functional capabilities via the singular value decomposition (SVD). For a given input y, we compute the SVD of the Jacobian matrix:" }, { "heading": "Ay = USV", "text": "T , with U and V orthogonal matrices, and S a diagonal matrix. We can decompose the effect of the network on its input in terms of the left singular vectors {U1, U2 . . . , UN} (columns of U ), the singular values {s1, s2 . . . , sN} (diagonal elements of S), and the right singular vectors {V1, V2, . . . VN} (columns of V ):\nfBF(y) = Ayy = USV T y = N∑ i=1 si(V T i y)Ui. (6)\nThe output is a linear combination of the left singular vectors, each weighted by the projection of the input onto the corresponding right singular vector, and scaled by the corresponding singular value.\nAnalyzing the SVD of a BF-CNN on a set of ten natural images reveals that most singular values are very close to zero (Figure 5a). 
The network is thus discarding all but a very low-dimensional portion of the input image. We also observe that the left and right singular vectors corresponding to the singular values with non-negligible amplitudes are approximately the same (Figure 5b). This means that the Jacobian is (approximately) symmetric, and we can interpret the action of the network as projecting the noisy signal onto a low-dimensional subspace, as is done in wavelet thresholding schemes. This is confirmed by visualizing the singular vectors as images (Figure 6). The singular vectors corresponding to non-negligible singular values are seen to capture features of the input image; those corresponding to near-zero singular values are unstructured. The BF-CNN therefore implements an approximate projection onto an adaptive signal subspace that preserves image structure, while suppressing the noise.
We can define an \"effective dimensionality\" of the signal subspace as d := Σ_{i=1}^N s_i², the amount of variance captured by applying the linear map to an N-dimensional Gaussian noise vector with variance σ², normalized by the noise variance. This captured variance equals
E_n‖A_y n‖² = E_n‖U S V^T n‖² = E_n‖S n‖² = E_n Σ_{i=1}^N s_i² n_i² = Σ_{i=1}^N s_i² E_n(n_i²) ≈ σ² Σ_{i=1}^N s_i²,
where E_n indicates expectation over the noise n, so that d = E_n‖A_y n‖² / σ² = Σ_{i=1}^N s_i².
When we examine the preserved signal subspace, we find that the clean image lies almost completely within it. For inputs of the form y := x + n (where x is the clean image and n the noise), we find that the subspace spanned by the singular vectors up to dimension d contains x almost entirely, in the sense that projecting x onto the subspace preserves most of its energy. This holds for the whole range of noise levels over which the network is trained (Figure 7).
We also find that for any given clean image, the effective dimensionality of the signal subspace (d) decreases systematically with noise level (Figure 5c). At lower noise levels the network detects a richer set of image features, and constructs a larger signal subspace to capture and preserve them. Empirically, we found that (on average) d is approximately proportional to 1/σ (see dashed line in Figure 5c). These signal subspaces are nested: the subspaces corresponding to lower noise levels contain more than 95% of the subspace axes corresponding to higher noise levels (Figure 7).
Finally, we note that this behavior of the signal subspace dimensionality, combined with the fact that it contains the clean image, explains the observed denoising performance across different noise levels (Figure 3). Specifically, if we assume d ≈ α/σ, the mean squared error is proportional to σ:
MSE = E_n‖A_y(x + n) − x‖² ≈ E_n‖A_y n‖² ≈ σ² d ≈ α σ. (7)
Note that this result runs contrary to the intuitive expectation that MSE should be proportional to the noise variance, which would be the case if the denoiser operated by projecting onto a fixed subspace. The scaling of MSE with the square root of the noise variance implies that the PSNR of the denoised image should be a linear function of the input PSNR, with a slope of 1/2, consistent with the empirical results shown in Figure 3. Note that this behavior holds even when the networks are trained only on modest levels of noise (e.g., σ ∈ [0, 10])." 
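In sketch form, the subspace analysis above can be reproduced with automatic differentiation, assuming a trained bias-free `denoiser` and a small grayscale crop `y` (the full N × N Jacobian is only tractable for small N):

```python
import torch
from torch.autograd.functional import jacobian

def jacobian_analysis(denoiser, y):            # y: a small (H, W) crop
    f = lambda z: denoiser(z.view(1, 1, *y.shape)).flatten()
    A_y = jacobian(f, y.flatten())             # N x N Jacobian at y
    s = torch.linalg.svdvals(A_y)              # singular values, descending
    d_eff = (s ** 2).sum()                     # effective dimensionality d
    asym = (A_y - A_y.T).norm() / A_y.norm()   # near 0 if approx. symmetric
    return s, d_eff, asym
```

Plotting `s` for crops at several noise levels should reproduce the rapid decay of Figure 5a, and `d_eff` should scale roughly as 1/σ.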
}, { "heading": "7 DISCUSSION", "text": "In this work, we show that removing constant terms from CNN architectures ensures strong generalization across noise levels, and also provides interpretability of the denoising method via linear-algebra techniques. We provide insights into the relationship between bias and generalization through a set of observations. Theoretically, we argue that if the denoising network operates by projecting the noisy observation onto a linear space of “clean” images, then that space should include all rescalings of those images, and thus, the origin. This property can be guaranteed by eliminating bias from the network. Empirically, in networks that allow bias, the net bias of the trained network is quite small within the training range. However, outside the training range the net bias grows dramatically resulting in poor performance, which suggests that the bias may be the cause of the failure to generalize. In addition, when we remove bias from the architecture, we preserve performance within the training range, but achieve near-perfect generalization, even to noise levels more than 10x those in the training range. These observations do not fully elucidate how our network achieves its remarkable generalization- only that bias prevents that generalization, and its removal allows it.\nIt is of interest to examine whether bias removal can facilitate generalization in noise distributions beyond Gaussian, as well as other image-processing tasks, such as image restoration and image compression. We have trained bias-free networks on uniform noise and found that they generalize outside the training range. In fact, bias-free networks trained for Gaussian noise generalize well when tested on uniform noise (Figures 18 and 19). In addition, we have applied our methodology to image restoration (simultaneous deblurring and denoising). Preliminary results indicate that bias-free networks generalize across noise levels for a fixed blur level, whereas networks with bias do not (Figure 20). An interesting question for future research is whether it is possible to achieve generalization across blur levels. Our initial results indicate that removing bias is not sufficient to achieve this.\nFinally, our linear-algebraic analysis uncovers interesting aspects of the denoising map, but these interpretations are very local: small changes in the input image change the activation patterns of the network, resulting in a change in the corresponding linear mapping. Extending the analysis to reveal global characteristics of the neural-network functionality is a challenging direction for future research." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was partially supported by the Howard Hughes Medical Institute (HHMI)." }, { "heading": "A DESCRIPTION OF DENOISING ARCHITECTURES", "text": "In this section we describe the denoising architectures used for our computational experiments in more detail.\nA.1 DNCNN\nWe implement BF-DnCNN based on the architecture of the Denoising CNN (DnCNN) (Zhang et al., 2017). DnCNN consists of 20 convolutional layers, each consisting of 3 × 3 filters and 64 channels, batch normalization (Ioffe & Szegedy, 2015), and a ReLU nonlinearity. It has a skip connection from the initial layer to the final layer, which has no nonlinear units. 
To construct a bias-free DnCNN (BF-DnCNN) we remove all sources of additive bias, including the mean parameter of the batch-normalization in every layer (note however that the scaling parameter is preserved).
A.2 RECURRENT CNN
Inspired by Zhang et al. (2018a), we consider a recurrent framework that produces a denoised image estimate of the form x̂_t = f(x̂_{t−1}, y_noisy) at time t, where f is a neural network. We use a 5-layer fully convolutional network with 3 × 3 filters in all layers and 64 channels in each intermediate layer to implement f. We initialize the denoised estimate as the noisy image, i.e. x̂_0 := y_noisy. For the version of the network with net bias, we add trainable additive constants to every filter in all but the last layer. During training, we run the recurrence for a maximum of T times, sampling T uniformly at random from {1, 2, 3, 4} for each mini-batch. At test time we fix T = 4.
A.3 UNET
Our UNet model (Ronneberger et al., 2015) has the following layers:
1. conv1 - Takes in the input image and maps it to 32 channels with 5 × 5 convolutional kernels.
2. conv2 - Input: 32 channels. Output: 32 channels. 3 × 3 convolutional kernels.
3. conv3 - Input: 32 channels. Output: 64 channels. 3 × 3 convolutional kernels with stride 2.
4. conv4 - Input: 64 channels. Output: 64 channels. 3 × 3 convolutional kernels.
5. conv5 - Input: 64 channels. Output: 64 channels. 3 × 3 convolutional kernels with dilation factor of 2.
6. conv6 - Input: 64 channels. Output: 64 channels. 3 × 3 convolutional kernels with dilation factor of 4.
7. conv7 - Transpose convolution layer. Input: 64 channels. Output: 64 channels. 4 × 4 filters with stride 2.
8. conv8 - Input: 96 channels. Output: 64 channels. 3 × 3 convolutional kernels. The input to this layer is the concatenation of the outputs of layers conv7 and conv2.
9. conv9 - Input: 32 channels. Output: 1 channel. 5 × 5 convolutional kernels.
The structure is the same as in Zhang et al. (2018a), but without recurrence. For the version with bias, we add trainable additive constants to all the layers other than conv9. This configuration of UNet assumes even width and height, so we remove one row or column from images with odd height or width.
A.4 SIMPLIFIED DENSENET
Our simplified version of the DenseNet architecture (Huang et al., 2017) has 4 blocks in total. Each block is a fully convolutional 5-layer CNN with 3 × 3 filters and 64 channels in the intermediate layers with ReLU nonlinearity. The first three blocks have an output layer with 64 channels, while the last block has an output layer with only one channel. The output of the ith block is concatenated with the input noisy image and then fed to the (i + 1)th block, so the last three blocks have 65 input channels. In the version of the network with bias, we add trainable additive parameters to all the layers except for the last layer in the final block." }, { "heading": "B DATASETS AND TRAINING PROCEDURE", "text": "Our experiments are carried out on 180 × 180 natural images from the Berkeley Segmentation Dataset (Martin et al., 2001). We use a training set of 400 images. The training set is augmented via downsampling, random flips, and random rotations of patches in these images (Zhang et al., 2017). A test set containing 68 images is used for evaluation. We train the DnCNN and its bias-free model on patches of size 50 × 50, which yields a total of 541,600 clean training patches. For the remaining architectures, we use patches of size 128 × 128 for a total of 22,400 training patches. 
We train DnCNN and its bias-free counterpart using the Adam optimizer (Kingma & Ba, 2014) over 70 epochs with an initial learning rate of 10⁻³ and a decay factor of 0.5 at the 50th and 60th epochs, with no early stopping. We train the other models using the Adam optimizer with an initial learning rate of 10⁻³ and train for 50 epochs with a learning rate schedule which decreases the rate by a factor of 0.25 if the validation PSNR decreases from one epoch to the next. We use early stopping and select the model with the best validation PSNR." }, { "heading": "C ADDITIONAL RESULTS", "text": "In this section we report additional results of our computational experiments:
• Figure 8 shows the first-order analysis of the residual of the different architectures described in Section A, except for DnCNN, which is shown in Figure 1.
• Figures 9 and 10 visualize the linear and net bias terms in the first-order decomposition of an example image at different noise levels.
• Figure 11 shows the PSNR results for the experiments described in Section 5.
• Figure 12 shows the SSIM results for the experiments described in Section 5.
• Figures 13, 14 and 15 show the equivalent filters at several pixels of two example images for different architectures (see Section 6.1).
• Figure 16 shows the singular vectors of the Jacobian of different BF-CNNs (see Section 6.2).
• Figure 17 shows the singular values of the Jacobian of different BF-CNNs (see Section 6.2).
• Figures 18 and 19 show that networks trained on noise samples drawn from a zero-mean Gaussian distribution generalize at test time to noise drawn from a zero-mean uniform distribution. Experiments follow the procedure described in Section 5, except that the networks are evaluated on a different noise distribution at test time.
• Figure 20 shows the application of BF-CNN and CNN to the task of image restoration, where the image is corrupted with both noise and blur at the same time. We show that BF-CNNs can generalize outside the training range of noise levels for a fixed blur level, but do not outperform CNNs when generalizing to unseen blur levels.
(Figures 13-15 tabulate, for each noise level σ, the noisy input y, the denoised output f(y) = A_y y, and the equivalent filters at Pixels 1-3.)" } ]
2020
ROBUST AND INTERPRETABLE BLIND IMAGE DENOISING VIA BIAS-FREE CONVOLUTIONAL NEURAL NETWORKS
SP:35407fdffbf982a97312ef16673be781d593ff22
[ " This paper proposes a method called attentive feature distillation and selection (AFDS) to improve the performance of transfer learning for CNNs. The authors argue that the regularization should constrain the proximity of feature maps, instead of pre-trained model weights. Specifically, the authors proposes two modifications of loss functions: 1) Attentive feature distillation (AFD), which modifies the regularization term to learn different weights for each channel and 2) Attentive feature selection (AFS), which modifies the ConvBN layers by predicts unimportant channels and suppress them. ", "The paper presents an improvement to the task of transfer learning by being deliberate about which channels from the base model are most relevant to the new task at hand. It does this by apply attentive feature selection (AFS) to select channels or features that align well with the down stream task and attentive feature distillation (AFD) to pass on these features to the student network. In the process they do channel pruning there by decreasing the size of the network and enabling faster inference speeds. Their major argument is that plain transfer learning is redundant and wasteful and careful attention applied to selection of the features and channels to be transfered can lead to smaller faster models which in several cases presented in the paper provide superior performance." ]
Deep convolutional neural networks are now widely deployed in vision applications, but a limited size of training data can restrict their task performance. Transfer learning offers the chance for CNNs to learn with limited data samples by transferring knowledge from models pretrained on large datasets. Blindly transferring all learned features from the source dataset, however, brings unnecessary computation to CNNs on the target task. In this paper, we propose attentive feature distillation and selection (AFDS), which not only adjusts the strength of transfer learning regularization but also dynamically determines the important features to transfer. By deploying AFDS on ResNet-101, we achieved a state-of-the-art computation reduction at the same accuracy budget, outperforming all existing transfer learning methods. With a 10× MACs reduction budget, a ResNet-101 equipped with AFDS, transfer-learned from ImageNet to Stanford Dogs 120, can achieve an accuracy 11.07% higher than its best competitor.
[ { "affiliations": [], "name": "Kafeng Wang" }, { "affiliations": [], "name": "Xitong Gao" }, { "affiliations": [], "name": "Yiren Zhao" }, { "affiliations": [], "name": "Xingjian Li" }, { "affiliations": [], "name": "Dejing Dou" }, { "affiliations": [], "name": "Cheng-Zhong Xu" } ]
[ { "authors": [ "Jose M Alvarez", "Mathieu Salzmann" ], "title": "Learning the number of neurons in deep networks", "venue": "Advances in Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Hossein Azizpour", "Ali Sharif Razavian", "Josephine Sullivan", "Atsuto Maki", "Stefan Carlsson" ], "title": "From generic to specific deep representations for visual recognition", "venue": "In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2015 }, { "authors": [ "Tolga Bolukbasi", "Joseph Wang", "Ofer Dekel", "Venkatesh Saligrama" ], "title": "Adaptive neural networks for efficient inference", "venue": "In Proceedings of the 34th International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Lukas Bossard", "Matthieu Guillaumin", "Luc Van Gool" ], "title": "Food-101 – mining discriminative components with random forests", "venue": "In European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Zhaowei Cai", "Nuno Vasconcelos" ], "title": "Cascade R-CNN: Delving into high quality object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Yves Chauvin" ], "title": "A back-propagation algorithm with optimal use of hidden units", "venue": "Advances in Neural Information Processing Systems", "year": 1989 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A largescale hierarchical image database", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Jeff Donahue", "Yangqing Jia", "Oriol Vinyals", "Judy Hoffman", "Ning Zhang", "Eric Tzeng", "Trevor Darrell" ], "title": "DeCAF: A deep convolutional activation feature for generic visual recognition", "venue": "Proceedings of the 31st International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Xin Dong", "Shangyu Chen", "Sinno Pan" ], "title": "Learning to prune deep neural networks via layer-wise optimal brain surgeon", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Xuanyi Dong", "Junshi Huang", "Yi Yang", "Shuicheng Yan" ], "title": "More is less: A more complicated network with less inference complexity", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Michael Figurnov", "Maxwell D. Collins", "Yukun Zhu", "Li Zhang", "Jonathan Huang", "Dmitry Vetrov", "Ruslan Salakhutdinov" ], "title": "Spatially adaptive computation time for residual networks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Xitong Gao", "Yiren Zhao", "Lukasz Dudziak", "Robert Mullins", "Cheng-zhong Xu" ], "title": "Dynamic channel pruning: Feature boosting and suppression", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Gregory Griffin", "Alex Holub", "Pietro Perona" ], "title": "Caltech-256 object category dataset", "venue": "Technical report,", "year": 2007 }, { "authors": [ "Yiwen Guo", "Anbang Yao", "Yurong Chen" ], "title": "Dynamic network surgery for efficient DNNs", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Babak Hassibi", "David G. 
Stork", "Gregory Wolff" ], "title": "Optimal brain surgeon: Extensions and performance comparisons", "venue": "Advances in Neural Information Processing Systems (NIPS),", "year": 1994 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification", "venue": "In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yang He", "Guoliang Kang", "Xuanyi Dong", "Yanwei Fu", "Yi Yang" ], "title": "Soft filter pruning for accelerating deep convolutional neural networks", "venue": "In International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2018 }, { "authors": [ "Yihui He", "Xiangyu Zhang", "Jian Sun" ], "title": "Channel pruning for accelerating very deep neural networks", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network. Advances in neural information processing systems 2014", "venue": "Deep Learning Workshop,", "year": 2014 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37,", "year": 2015 }, { "authors": [ "Yunhun Jang", "Hankook Lee", "Sung Ju Hwang", "Jinwoo Shin" ], "title": "Learning what and where to transfer", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Aditya Khosla", "Nityananda Jayadevaprakash", "Bangpeng Yao", "Li Fei-Fei" ], "title": "Novel dataset for fine-grained image categorization", "venue": "In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2011 }, { "authors": [ "Yann LeCun", "John S. Denker", "Sara A. Solla" ], "title": "Optimal brain damage", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 1990 }, { "authors": [ "Xingjian Li", "Haoyi Xiong", "Hanchao Wang", "Yuxuan Rao", "Liping Liu", "Jun Huan" ], "title": "DELTA: Deep learning transfer using feature map with attention for convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Xuhong Li", "Yves Grandvalet", "Franck Davoine" ], "title": "Explicit inductive bias for transfer learning with convolutional networks", "venue": "Thirty-fifth International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Ji Lin", "Yongming Rao", "Jiwen Lu", "Jie Zhou" ], "title": "Runtime neural pruning", "venue": null, "year": 2020 }, { "authors": [ "R. 
Reed" ], "title": "Pruning algorithms–a survey", "venue": "IEEE Transactions on Neural Networks,", "year": 2014 }, { "authors": [ "Mengye Ren", "Andrei Pokrovsky", "Bin Yang", "Raquel Urtasun" ], "title": "SBNet: Sparse blocks", "venue": null, "year": 1993 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster R-CNN", "venue": null, "year": 2018 }, { "authors": [ "2017. Mingxing Tan", "Quoc Le" ], "title": "EfficientNet: Rethinking model scaling for convolutional neural", "venue": null, "year": 2017 }, { "authors": [ "C. Wah", "S. Branson", "P. Welinder", "P. Perona", "S. Belongie" ], "title": "The Caltech-UCSD Birds", "venue": null, "year": 2017 }, { "authors": [ "Junho Yim", "Donggyu Joo", "Jihoon Bae", "Junmo Kim" ], "title": "informative assumption in channel pruning of convolution layers", "venue": null, "year": 2018 }, { "authors": [ "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features", "venue": null, "year": 2017 } ]
[ { "heading": "1 Introduction", "text": "Despite recent successes of CNNs achieving state-of-the-art performance in vision applications (Tan & Le, 2019; Cai & Vasconcelos, 2018; Zhao et al., 2018; Ren et al., 2015), there are two major shortcomings limiting their deployments in real life. First, training CNNs from random initializations to achieve high task accuracy generally requires a large amount of data that is expensive to collect. Second, CNNs are typically compute-intensive and memory-demanding, hindering their adoption to power-limited scenarios.\nTo address the former challenge, transfer learning (Pan & Yang, 2009) is thus designed to transfer knowledge learned from the source task to a target dataset that has limited data samples. In practice, we often choose a source dataset such that the input domain of the source comprises the domain of the target. A common paradigm for transfer learning is to train a model on a large source dataset, and then fine-tune the pre-trained weights with regularization methods on the target dataset (Zagoruyko & Komodakis, 2017; Yim et al., 2017; Li et al., 2018; Li & Hoiem, 2018; Li et al., 2019). For example, one regularization method, L2-SP (Li et al., 2018), penalizes the L2-distances of pretrained weights on the source dataset and the weights being trained on the target dataset. The pretrained source weights serves as a starting point when training on the target data. During fine-tuning on the target dataset, the regularization constrains the search space around this starting point, which in turn prevents overfitting the target dataset.\nIntuitively, the responsibility of transfer learning is to preserve the source knowledge acquired by important neurons. The neurons thereby retain their abilities to extract features from the source domain, and contribute to the network’s performance on the target dataset.\n∗Equal contribution, corresponding authors. †Work partially done during an internship at Baidu Research.\nMoreover, by determining the importance of neurons, unimportant ones can further be removed from computation during inference with network pruning methods (Luo et al., 2017; He et al., 2017; Zhuang et al., 2018; Ye et al., 2018; Gao et al., 2019). The removal of unnecessary compute not only makes CNNs smaller in size but also reduces computational costs while minimizing possible accuracy degradations. As the source domain encompasses the target, many neurons responsible for extracting features from the source domain may become irrelevant to the target domain and can be removed. In Figure 1, a simple empirical study of the channel neurons’ activation magnitudes corroborates our intuition: as deeper layers extract higher-level features, more neurons become either specialized or irrelevant to dogs. The discussion above hence prompts two questions regarding the neurons: which neurons should we transfer source knowledge to, and which are actually important to the target model?\nYet traditional transfer learning methods fail to provide answers to both, as generally they transfer knowledge either equally for each neuron with the same regularized weights, or determine the strength of regularization using only the source dataset (Li et al., 2018). 
The source domain could be vastly larger than the target, giving importance to weights that are irrelevant to the target task.
Recent years have seen a surge of interest in network pruning techniques, many of which induce sparsity by pushing neuron weights or outputs to zero, allowing them to be pruned without a detrimental impact on the task accuracies. Even though pruning methods present a solution to neuron/weight importance, unfortunately they do not provide an answer to the latter question, i.e. whether these neurons/weights are important to the target dataset. The reason for this is that pruning optimization objectives are often in conflict with traditional transfer learning, as both drive weight values in different directions: zero for pruning, and the initial starting point for transfer learning. As we will see later, a naïve composition of the two methods could have a disastrous impact on the accuracy of a pruned CNN transfer-learned on the target dataset.
In this paper, to tackle the challenge of jointly transferring source knowledge and pruning target CNNs, we propose a new method based on the attention mechanism (Vaswani et al., 2017), attentive feature distillation and selection (AFDS). For the images in the target dataset, AFDS dynamically learns not only the features to transfer, but also the unimportant neurons to skip.
During transfer learning, instead of fine-tuning with L2-SP regularization, which explores the proximity of the pre-trained weights, we argue that a better alternative is to mimic the feature maps, i.e. the output response of each convolutional layer in the source model when images from the target dataset are shown, with L2-distances. This way the fine-tuned model can still learn the behavior of the source model. Additionally, without the restriction of searching only the proximity of the initial position, the weights in the target model can be optimized freely, increasing their generalization capacity. Therefore, we present attentive feature distillation (AFD) to learn which relevant features to transfer.
To accelerate the resulting transfer-learned model, we further propose attentive feature selection (AFS) to prune networks dynamically. AFS is designed to learn to predictively select important output channels in the convolution to evaluate, and to skip unimportant ones, depending on the input to the convolution. Rarely activated channel neurons can further be removed from the network, reducing the model's memory footprint.
From an informal perspective, both AFD and AFS learn to adjust the “valves” that control the flow of information for each channel neuron. The former adjusts the strength of regularization, thereby tuning the flow of knowledge being transferred from the source model. The latter allows salient information to pass on to the subsequent layer and stops the flow of unimportant information. 
A significant attribute that differentiates AFD and AFS from their existing counterparts is that we employ attention mechanisms to adaptively learn to “turn the valves” dynamically with small trainable auxiliary networks.
Our main contributions are as follows:
• We present attentive feature distillation and selection (AFDS) to effectively transfer-learn CNNs, and demonstrate state-of-the-art performance on many publicly available datasets with ResNet-101 (He et al., 2016) models transfer-learned from ImageNet (Deng et al., 2009).
• We paired a large range of existing transfer learning and network pruning methods, and examined their abilities to trade off FLOPs against task accuracy.
• By changing the fraction of channel neurons to skip for each convolution, AFDS can further accelerate the transfer-learned models while minimizing the impact on task accuracy. We found that AFDS generally provides the best FLOPs and accuracy trade-off when compared to a broad range of paired methods." }, { "heading": "2 Related Work", "text": "" }, { "heading": "2.1 Transfer Learning", "text": "Training a deep CNN to achieve high accuracy generally requires a large amount of training data, which may be expensive to collect. Transfer learning (Pan & Yang, 2009) addresses this challenge by transferring knowledge learned on a large dataset that has a similar domain to the training dataset. A typical approach for CNNs is to first train the model on a large source dataset, and make use of their feature extraction abilities (Donahue et al., 2014; Razavian et al., 2014). Moreover, it has been demonstrated that the task accuracy can be further improved by fine-tuning the resulting pre-trained model on a smaller target dataset with a similar domain but a different task (Yosinski et al., 2014; Azizpour et al., 2015). Li et al. (2018) proposed L2-SP regularization to minimize the L2-distance between each fine-tuned parameter and its initial pre-trained value, thus preserving knowledge learned in the pre-trained model. In addition, they presented L2-SP-Fisher, which further weighs each L2-distance using the Fisher information matrix estimated from the source dataset. Instead of constraining the parameter search space, Li et al. (2019) showed that it is often more effective to regularize feature maps during fine-tuning; their method further learns which features to pay attention to. Learning without Forgetting (Li & Hoiem, 2018) learns to adapt the model to new tasks, while trying to match the output response of the original model on the original task using knowledge distillation (KD) (Hinton et al., 2014). Methods proposed by Zagoruyko & Komodakis (2017) and Yim et al. (2017) transfer knowledge from a teacher model to a student by regularizing features. The former computes and regularizes spatial statistics across all feature map channels, whereas the latter estimates the flow of information across layers for each pair of channels, and transfers this knowledge to the student. Instead of manually deciding the regularization penalties and what to regularize as in the previous approaches, Jang et al. (2019) used meta-learning to automatically learn what knowledge to transfer from the teacher and to where in the student model.
Inspired by Li et al. (2019) and Jang et al. (2019), this paper introduces attentive feature distillation (AFD), which similarly transfers knowledge by learning from the teacher's feature maps. It however differs from Jang et al. 
(2019) as the teacher and student models share the same network topology, and it instead learns which channels to transfer from the teacher to the student in the same convolutional output." }, { "heading": "2.2 Structured Sparsity", "text": "Sparsity in neural networks has been a long-studied subject (Reed, 1993; LeCun et al., 1990; Chauvin, 1989; Mozer & Smolensky, 1989; Hassibi et al., 1994). Related techniques have been applied to modern deep CNNs with great success (Guo et al., 2016; Dong et al., 2017a), significantly lowering their storage requirements. In general, these methods zero out individual weights, producing irregular sparse connections that cannot be efficiently exploited by GPUs to speed up computation.
For this reason, much recent work has turned to structured sparsity (Alvarez & Salzmann, 2016; Wen et al., 2016; Liu et al., 2017; He et al., 2017; 2018). This approach aims to find coarse-grained sparsity while preserving dense structures, thus allowing conventional GPUs to compute the models efficiently. Alvarez & Salzmann (2016) and Wen et al. (2016) both added group Lasso penalties on non-zero weights, and entirely removed channels that had been reduced to zero. Liu et al. (2017) proposed network slimming (NS), which adds L1 regularization to the trainable channel-wise scaling parameters γ used in batch normalization, and gradually prunes channels with small γ values by thresholding. He et al. (2018) introduced soft filter pruning (SFP), which iteratively fine-tunes and sets channels with small L2-norms to zero.
Pruning algorithms remove weights or neurons from the network. The network may therefore lose its ability to process some difficult inputs correctly, as the neurons responsible for them are permanently discarded. Gao et al. (2019) found empirically that task accuracies degrade considerably when most of the computation is removed from the network, and introduced feature boosting and suppression (FBS). Instead of removing neurons permanently from the network, FBS learns to dynamically prune unimportant channels, depending on the current input image. In this paper, attentive feature selection (AFS) builds on the advantages of both static and dynamic pruning algorithms. AFS not only preserves neurons that are important to some input images, but also removes from the network those that are unimportant for most inputs, reducing both the memory and compute requirements for inference.
There are methods that dynamically select which paths to evaluate in a network depending on the input (Figurnov et al., 2017; Dong et al., 2017b; Bolukbasi et al., 2017; Lin et al., 2017; Shazeer et al., 2017; Wu et al., 2018; Ren et al., 2018). They however introduce architectural and/or training method changes, and thus cannot be applied directly to existing popular models pre-trained on ImageNet (Deng et al., 2009)." }, { "heading": "3 Attentive Feature Distillation and Selection", "text": "" }, { "heading": "3.1 High-Level Overview", "text": "We begin by providing a high-level overview of attentive feature distillation and selection (AFDS). AFDS introduces two new components to augment each conventional batch-normalized convolutional (ConvBN) layer (Ioffe & Szegedy, 2015), as illustrated in Figure 2. The AFS preemptively learns the importance of each channel in the output of the ConvBN layer, and can suppress unimportant channels, thus allowing the expensive convolution operation to skip evaluating these channels. 
The AFD learns the importance of each channel in the output activation, and uses the importance as weights to regularize feature maps in the target model with L2-distances. Each component is a small neural network containing a small number of parameters that can be trained with conventional stochastic gradient descent (SGD)." }, { "heading": "3.2 Preliminaries", "text": "Consider a set of training data D where each sample (x, y) consists of an input image x ∈ R^{C×H×W} and a ground-truth label y ∈ N. Here C, H and W respectively denote the number of channels, the height, and the width of the input image. Training a deep CNN classifier thus minimizes the following loss function with an optimization method based on SGD:
L(θ) = E_{(x,y)∼D}[L_CE(f(x, θ), y) + R(θ, x) + λ‖θ‖_2^2], (1)
where θ comprises all parameters of the model, and the loss L_CE(f(x, θ), y) denotes the cross-entropy loss between the CNN output f(x, θ) and the label y. The regularizer R(θ, x) is often used to reduce the risk of overfitting. In conventional training, R(θ, x) = 0. Finally, we impose an L2 penalty on θ, where ‖z‖_2 represents the L2-norm of z across all its elements. We assume that f(x, θ) is a feed-forward CNN composed of N ConvBN layers for feature extraction, f_l(x_{l−1}, θ_l) with l ∈ L = {1, 2, . . . , N}, and a final fully-connected layer for classification, g(x_N, θ_g). Here, for the l-th layer, x_{l−1} is the input to the layer, with x_0 indicating x, and θ_l is the layer's parameters. Therefore, the l-th layer is defined as:
x_l = f_l(x_{l−1}, θ_l) = relu(γ_l · norm(conv(x_{l−1}, θ_l)) + β_l), (2)
where x_l ∈ R^{C_l×H_l×W_l} contains the C_l feature maps of the layer, each with height H_l and width W_l. The function conv(x_{l−1}, θ_l) is a convolution that takes x_{l−1} as input and uses trainable parameters θ_l, and norm(z) performs batch normalization. Finally, γ_l, β_l ∈ R^{C_l} are trainable vectors, the multiplications (·) and additions (+) are channel-wise, and relu(z) = max(z, 0) stands for the ReLU activation. Although we use the feed-forward classifier above for simplicity, it can be easily modified to contain additional structures such as residual connections (He et al., 2016) and computations for object detection (Ren et al., 2015).
During transfer learning, as we fine-tune the network on a different task, the final layer g(x_N, θ_g) is generally replaced with a new randomly-initialized one, h(x_N, θ_h). To prevent overfitting, additional terms are used during transfer learning; for instance, L2-SP (Li et al., 2018) further constrains the parameters θ_l to explore around their initial values θ*_l:
R(θ, x) = λ_SP Σ_{l∈L} ‖θ_l − θ*_l‖_2^2 + λ_L2 ‖θ‖_2^2. (3)
Instead of regularizing parameters, methods based on knowledge distillation (Hinton et al., 2014) encourage the model to mimic the behavior of the original while learning the target task. Learning without Forgetting (LwF) (Li & Hoiem, 2018) uses the following regularizer to mimic the response from the original classifiers:
R(θ, x) = λ_LwF L_CE(g*(f_L(x, θ_L), θ*_g)), (4)
where f_L(x, θ_L) indicates the first N layers, g* and θ*_g respectively denote the original fully-connected (FC) layer and its associated parameters, and generally λ_LwF = 1. Zagoruyko & Komodakis (2017), Yim et al. (2017) and Li et al. (2019) chose to regularize feature maps in some intermediate layers L′ ⊆ L. We assume that x*_l is the l-th layer output of the original model with weights θ* when the input x is shown to the model, and r is a method-dependent function that constrains the relationship between x*_l and x_l. 
The regularizer can then be defined as follows:
R(θ, x) = λ_KD Σ_{l∈L′} r(x*_l, x_l). (5)" }, { "heading": "3.3 Attentive Feature Distillation", "text": "A simple way to extend Equation (5) is to constrain the L2-norm distance between x*_l and x_l, thus pushing the target model to learn the feature map responses of the source:
R(θ, x) = λ_FD Σ_{l∈L′} ‖x*_l − x_l‖_2^2. (6)
The above formulation, however, places equal weight on each channel neuron of the feature maps. As we discussed earlier, the importance of channel neurons varies drastically when different input images are shown. It is thus desirable to enforce a different penalty for each channel depending on the input x. For this purpose, we design the regularizer:
R(θ, x) = λ_AFD Σ_{l∈L′} Σ_{c∈C_l} ρ_l^[c](x*_l) ‖(x*_l − x_l)^[c]‖_2^2. (7)
Note that in Equation (7), for any tensor z, the term z^[c] denotes the c-th slice of the tensor. The transfer importance predictor ρ_l : R^{C_l×H_l×W_l} → R^{C_l} computes for each channel the importance of the source activation maps, which governs the strength of the L2 regularization for each channel. The predictor function is trainable and is defined as a small network with two FC layers:
ρ_l(x*_l) = softmax(relu(♭(x*_l) ϕ_l + ν_l) ϕ′_l + ν′_l). (8)
The function ♭ : R^{C×H×W} → R^{C×HW} flattens the spatial dimensions in a channel-wise fashion; the parameters ϕ_l ∈ R^{HW×H}, ν_l ∈ R^{1×H}, ϕ′_l ∈ R^H and ν′_l ∈ R^{C_l} can thus be trained to adjust the importance of each channel dynamically; finally, the softmax activation is borrowed from the attention mechanism (Vaswani et al., 2017) to normalize the importance values. In our experiments, ϕ_l and ϕ′_l use He et al. (2015)'s initialization, and ν_l and ν′_l are both initialized to 0." }, { "heading": "3.4 Attentive Feature Selection", "text": "In a fashion similar to feature boosting and suppression (FBS) (Gao et al., 2019), AFS modifies the ConvBN layers from Equation (2):
f̂_l(x_{l−1}, θ_l) = relu(π_l(x_{l−1}) · norm(conv(x_{l−1}, θ_l)) + β_l), (9)
where the predictor function π_l : R^{C_{l−1}×H_{l−1}×W_{l−1}} → R^{C_l}, which takes as input the activation maps of the previous layer, is used to replace the vector γ_l. This function dynamically predicts the importance of each channel, and suppresses certain unimportant channels by setting them to zero. The expensive conv function can hence be accelerated by skipping the disabled output channels. The predictor function is defined as below:
π_l(x_{l−1}) = m_l · q_l(x_{l−1}), where q_l(x_{l−1}) = wta_⌈dC_l⌉(s_l · h_l(x_{l−1}) + (1 − s_l) · γ_l), (10)
where m_l, s_l ∈ {0, 1}^{C_l} are both constant masks that take binary values: m_l prunes output channels by permanently setting them to zero, and s_l decides for each channel whether the output of h_l(x_{l−1}) or γ_l should be used. It is clear that when m_l = 1, no channel neurons are removed from the network. In Section 3.5, we explain how m_l and γ_l can be determined during the fine-tuning process. The winner-take-all function wta_⌈dC_l⌉(z) preserves the ⌈dC_l⌉ most salient values in z, and suppresses the remaining ones by setting them to zero. The density value 0 < d ≤ 1 is a constant that controls the number of channels to preserve during inference, with 1 preserving all C_l channels. The smaller d gets, the more channels can be skipped, which in turn accelerates the model. Finally, the function h_l : R^{C_{l−1}×H×W} → R^{C_l} is a small network that is used to predict the importance of each channel. 
It is composed of a global average pool followed by an FC layer, where pool : R^{C_{l−1}×H×W} → R^{C_{l−1}} computes the average across the spatial dimensions for each channel:
h_l(x_{l−1}) = relu(pool(x_{l−1}) ϕ″_l + ν″_l). (11)
For the initialization of the FC parameters, we apply He et al. (2015)'s method to the trainable weights ϕ″_l ∈ R^{C_{l−1}×C_l}, and ν″_l ∈ R^{C_l} is initialized to zeros." }, { "heading": "3.5 Training Procedure", "text": "In this section, we describe the pipeline of AFDS for transferring knowledge from a source model to a new model by fine-tuning on the target dataset. The detailed algorithm can be found in Appendix A.
Initially, we have a pre-trained model f with parameters θ* for the source dataset (e.g. ImageNet). To ensure better accuracies on compressed target models, all ConvBN layers f_l in f are extended with AFS as discussed in Section 3.4, with d initially set to 1, which means that all output channels in a convolutional layer are evaluated during inference, i.e. no acceleration. The pre-trained model is then fine-tuned on the target training dataset D with the AFD regularization proposed in Section 3.3.
Empirically we found that in residual networks with greater depths, AFS could become notably challenging to train to high accuracies. To mitigate this, for each output channel of a layer l we update s_l according to the variance of h_l(x_{l−1}) observed on the target dataset. For each channel, if the variance is smaller than a threshold δ_s, then we set the entry in s_l to zero for that particular channel. This action replaces the output of h_l(x_{l−1}) with γ_l, which is a trainable parameter initialized to the mean of h_l(x_{l−1}). We compute the mean and variance statistics using Welford (1962)'s online algorithm, which can efficiently compute the statistics in a single pass with O(1) storage. In our experiments, δ_s is set to a value such that 50% of the channel neurons use the predictor function h_l.
Moreover, we discovered that many of the channel neurons are rarely activated in an AFS-based network. We further propose to remove the channel neurons that are activated with a low frequency. In each layer l, the mask m_l is used to disable certain channels from the network by setting their output to a constant 0, if the probability of a channel neuron being active is lower than δ_m. Zeroed-out channels can thus be permanently removed when the model is used in inference." }, { "heading": "4 Experiments", "text": "In this section we provide an extensive empirical study of the joint methods of transfer learning and channel pruning. We evaluate the methods with 6 different benchmark datasets: Caltech-256 (Griffin et al., 2007) of 256 general object categories; Stanford Dogs 120 (Khosla et al., 2011), which specializes in images containing dogs; MIT Indoors 67 (Quattoni & Torralba, 2009) for indoor scene classification; Caltech-UCSD Birds-200-2011 (CUB-200-2011) (Wah et al., 2011) for classifying birds; and Food-101 (Bossard et al., 2014) for food categories. We refer to Li et al. (2018) and Li et al. (2019) for a detailed description of the benchmark datasets. For Caltech-256, we randomly sample either 30 or 60 images from the training set for each category to produce the Caltech-256-30 and -60 training datasets.
We use the ResNet-101 from torchvision (https://pytorch.org/docs/stable/torchvision/index.html) pre-trained on ImageNet as the network for experiments. 
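Before detailing the fine-tuning schedule, a minimal sketch of the two auxiliary components from Sections 3.3 and 3.4 may make the setup concrete. PyTorch is assumed; the class names, layer sizes, and masking logic are illustrative rather than the exact released implementation:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AFSConvBN(nn.Module):
    # One AFS-augmented ConvBN layer (equations (9)-(11)): the predictor pi
    # replaces the BN scaling vector gamma, and channels outside the top
    # ceil(d * C_out) predicted importances are zeroed, per sample.
    def __init__(self, c_in, c_out, k=3, d=1.0):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.norm = nn.BatchNorm2d(c_out, affine=False)
        self.fc = nn.Linear(c_in, c_out)               # h_l: pool -> FC -> relu
        self.gamma = nn.Parameter(torch.ones(c_out))   # fallback scale
        self.beta = nn.Parameter(torch.zeros(c_out))
        self.register_buffer("m", torch.ones(c_out))   # pruning mask m_l
        self.register_buffer("s", torch.ones(c_out))   # predictor/gamma mask s_l
        self.d = d

    def forward(self, x):
        h = F.relu(self.fc(x.mean(dim=(2, 3))))        # eq. (11)
        q = self.s * h + (1.0 - self.s) * self.gamma   # inner term of eq. (10)
        k = math.ceil(self.d * q.shape[1])
        kth = q.topk(k, dim=1).values[:, -1:]          # winner-take-all threshold
        pi = self.m * torch.where(q >= kth, q, torch.zeros_like(q))
        y = pi[:, :, None, None] * self.norm(self.conv(x))
        return F.relu(y + self.beta[None, :, None, None])

class AFDImportance(nn.Module):
    # Transfer importance predictor rho for one layer (equation (8)):
    # softmax-normalised per-channel regularisation weights.
    def __init__(self, c, h, w):
        super().__init__()
        self.fc = nn.Linear(h * w, h)                            # phi_l, nu_l
        self.phi2 = nn.Parameter(torch.randn(h) / math.sqrt(h))  # phi'_l
        self.nu2 = nn.Parameter(torch.zeros(c))                  # nu'_l

    def forward(self, src_fm):                         # src_fm: (C, H, W)
        z = F.relu(self.fc(src_fm.flatten(1)))         # flatten plays the role of the flat map
        return F.softmax(z @ self.phi2 + self.nu2, dim=0)

def afd_loss(rho, src_fm, tgt_fm):                     # equation (7), one layer
    per_channel = (src_fm - tgt_fm).flatten(1).pow(2).sum(dim=1)
    return (rho * per_channel).sum()
```

Note that this sketch still computes every output channel and masks afterwards; realizing the claimed MAC savings requires a convolution kernel that actually skips suppressed channels, as in Gao et al. (2019).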
For ResNet-101 equipped with AFS, we start by extending the pre-trained model and replacing each batch normalization with a randomly initialized AFS, and fine-tune the resulting model on ImageNet for 90 epochs with a learning rate of 0.01 decaying by a factor of 10 every 30 epochs. The resulting model matches its original baseline accuracy.\nFor each benchmark dataset, the final FC layer of the network is replaced with a new FC layer randomly initialized with He et al. (2015)'s method to match the number of output categories accordingly. We then perform transfer learning with 4 different methods: L2 (fine-tuning without additional regularization), L2-SP (Li et al., 2018), learning without forgetting (LwF) (Li & Hoiem, 2018), and finally AFD for models using AFS.\nTo accelerate the resulting fine-tuned models, we continue fine-tuning the model while gradually pruning away channels used during inference. For this, we separately examine 3 pruning strategies: network slimming (NS) (Liu et al., 2017), soft filter pruning (SFP) (He et al., 2018) and finally AFS for models transfer learned with AFD. Note that NS prunes channels by sorting them globally, while SFP does so in a layer-wise manner with identical prune ratios.\n1https://pytorch.org/docs/stable/torchvision/index.html\nDuring this procedure, we start with an unpruned model and incrementally remove 10% of the channels used in inference, i.e. preserving 90%, 80%, etc., down to 10% of all channels for the accelerated models. At each step, we fine-tune each model using 4500 steps of SGD with a batch size of 48, at a learning rate of 0.01, before fine-tuning for a further 4500 steps at a learning rate of 0.001. AFS additionally updates the m and s masks between the two fine-tuning runs.\nFor each pruned model, we can compute the number of multiply-accumulate operations (MACs) required to perform inference on an image. For each accelerated convolution, the required number of MACs is $k^2 H W C_{in} C_{out}$, where C_in and C_out are the numbers of input and output channels that are not pruned, respectively. We compute the total number of MACs by summing up the MACs in all convolutions, residual connections, and the final pooling and FC layers. For AFS, as we dynamically select which channels to evaluate during inference, we additionally add the overhead of the importance predictor layers to the total number of MACs.\nIn Figure 3, we present the trade-off relationship between the number of MACs vs. the target dataset accuracies for Stanford Dogs and Caltech-256-60. It is clear that AFDS (ours) exceeds various combinations of pruning methods (NS, SFP) and transfer learning methods (L2, L2-SP, LwF). The results for the remaining datasets can be found in Appendix B. The trade-off curves show that AFDS minimizes accuracy degradation: even when 47% of the total MACs are removed from the original model, AFDS results in only a 1.83% drop in accuracy for the model trained on Stanford Dogs. In extreme cases where we permit only 1/10 of the original computations, our method can still manage a 70.70% accuracy, which is substantially better than other pruning algorithms: NS drops to 1.33% accuracy and SFP only reaches 59.63%.\nTable 1 provides numerical comparisons of different pruning methods against AFS under various speed-up constraints. Table 2 similarly compares transfer learning strategies against AFD. Under most acceleration requirements, the combined method, AFDS, achieves the best accuracies on the target datasets.
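As a sanity check on the MAC accounting described above, the per-convolution cost is easy to reproduce; the helper below is our own illustration of the stated formula, not code from the paper.

def conv_macs(k, h, w, c_in, c_out):
    # MACs of one convolution with a k x k kernel, h x w output resolution,
    # and c_in / c_out unpruned input / output channels: k^2 * H * W * C_in * C_out.
    return k * k * h * w * c_in * c_out

# Example: a 3x3 convolution on a 56x56 map with 64 -> 64 surviving channels
# costs conv_macs(3, 56, 56, 64, 64) = 115,605,504 MACs, i.e. about 0.116 GMACs.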
Finally, Table 3 compares AFDS against other published methods that perform transfer learning. AFDS can achieve state-of-the-art accuracies when compared to methods that produce models with a similar number of MACs." }, { "heading": "5 Conclusion", "text": "In this paper, we introduced attentive feature distillation and selection (AFDS), a dual-attention method that aims to reap the advantages of both transfer learning and channel pruning methods. By applying AFDS during fine-tuning, we can not only learn a new model with a higher target task accuracy, but also further accelerate it by computing only a subset of channel neurons in each convolutional layer. Across a wide range of datasets, we demonstrated the smallest drop in validation accuracies under the same speed-up constraints when compared to traditional compression methods such as network slimming (Liu et al., 2017) and soft filter pruning (He et al., 2018)." }, { "heading": "Acknowledgements", "text": "This work is supported in part by National Key R&D Program of China (No. 2019YFB2102100), Science and Technology Development Fund of Macao S.A.R (FDCT) under number 0015/2019/AKP, Shenzhen Discipline Construction Project for Urban Computing and Data Intelligence, the National Natural Science Foundation of China (Nos. 61806192, 61802387), Shenzhen Science and Technology Innovation Commission (No. JCYJ2017081853518789, JCYJ20190812160003719), the Guangdong Science and Technology Plan Guangdong-Hong Kong Cooperation Innovation Platform (No. 2018B050502009), and China's Post-doctoral Science Fund (No. 2019M663183)." }, { "heading": "A The Overall Training Algorithm", "text": "In Algorithm 1 we illustrate the complete training procedure described above. Here, the function takes as input the target training dataset D, the source model f and its parameters θ*, the total number of steps to fine-tune S, the initial learning rate α, and the threshold hyperparameters δ_s and δ_m for s_l and m_l respectively. The function returns the optimized parameters θ for the target dataset, and both constant masks for all layers, s = (s_1, s_2, . . . , s_L) and m = (m_1, m_2, . . . , m_L). The function SGD then fine-tunes the model parameters. For each layer l, we compute the mean µ_l and variance σ²_l statistics of q_l(x_{l-1}), and use them to compute s_l.\nAlgorithm 1 Training Procedure\n1: function AFDS(D, f, θ*, S, α, δ_s, δ_m)\n2:   for l ∈ L: s_l ← 1\n3:   for l ∈ L: m_l ← 1\n4:   θ ← SGD(D, f, θ*, s, m, ⌈S/2⌉, α, R)\n5:   for l ∈ L do\n6:     µ_l ← E_{(x,y)∼D}[q_l(x_{l−1})]\n7:     σ²_l ← E_{(x,y)∼D}[(q_l(x_{l−1}) − µ_l)²]\n8:     p_l ← E_{(x,y)∼D}[π_l(x_{l−1}) > 0]\n9:     s_l ← σ²_l > δ_s\n10:    γ_l ← µ_l\n11:    m_l ← p_l > δ_m\n12:  end for\n13:  θ ← SGD(D, f, θ, s, m, ⌈S/2⌉, α/10, R)\n14:  return θ, s, m\n15: end function" }, { "heading": "B Additional Results", "text": "" } ]
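Lines 6-7 of Algorithm 1 need the mean and variance of q_l(x_{l-1}) over the target dataset, computed in a single pass with Welford (1962)'s algorithm as referenced in Section 3.5. Below is a minimal scalar version (our own sketch; per-channel statistics would keep one such accumulator per channel, or vectorize the updates):

class RunningStats:
    # Welford's online algorithm: single-pass mean / variance with O(1) storage.
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n > 1 else 0.0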
2,020
Pay Attention to Features, Transfer Learn Faster CNNs
SP:d510a4587befa21d3f6b151d437e9d5272ce03a2
[ "This paper proposed BOGCN-NAS that encodes current architecture with Graph convolutional network (GCN) and uses the feature extracted from GCN as the input to perform a Bayesian regression (predicting bias and variance, See Eqn. 5-6). They use Bayesian Optimization to pick the most promising next model with Expected Improvement, train it and take its resulting accuracy/latency as an additional training sample, and repeat. ", "This paper provide a NAS algorithm using Bayesian Optimization with Graph Convolutional Network predictor. The method apply GCN as a surrogate model to adaptively discover and incorporate nodes structure to approximate the performance of the architecture. The method further considers an efficient multi-objective search which can be flexibly injected into any sample-based NAS pipelines to efficiently find the best speed/accuracy trade-off." ]
Neural Architecture Search (NAS) has shown great potential in finding better neural network designs than human design. Sample-based NAS is the most fundamental method, aiming to explore the search space and evaluate the most promising architectures. However, few works have focused on improving the sampling efficiency for multi-objective NAS. Inspired by the graph structure that is natural to a neural network, we propose BOGCN-NAS, a NAS algorithm using Bayesian Optimization with a Graph Convolutional Network (GCN) predictor. Specifically, we apply a GCN as a surrogate model to adaptively discover and incorporate node structure to approximate the performance of an architecture. For NAS-oriented tasks, we also design a weighted loss focusing on architectures with high performance. Our method further considers an efficient multi-objective search which can be flexibly injected into any sample-based NAS pipeline to efficiently find the best speed/accuracy trade-off. Extensive experiments are conducted to verify the effectiveness of our method over many competing methods, e.g. 128.4× more efficient than Random Search and 7.8× more efficient than the previous SOTA LaNAS for finding the best architecture on the largest NAS dataset, NASBench-101.
[]
[ { "authors": [ "Y. Akimoto", "S. Shirakawa", "N. Yoshinari", "K. Uchida", "S. Saito", "K. Nishida" ], "title": "Adaptive stochastic natural gradient method for one-shot neural architecture search", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "B. Baker", "O. Gupta", "N. Naik", "R. Raskar" ], "title": "Designing neural network architectures using reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "C. Bishop" ], "title": "Pattern recognition and machine learning", "venue": null, "year": 2006 }, { "authors": [ "H. Cai", "T. Chen", "W. Zhang", "Y. Yu", "J. Wang" ], "title": "Efficient architecture search by network transformation", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "H. Cai", "L. Zhu", "S. Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "L. Chen", "M. Collins", "Y. Zhu", "G. Papandreou", "B. Zoph", "F. Schroff", "H. Adam", "J. Shlens" ], "title": "Searching for efficient multi-scale architectures for dense image prediction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Y. Chen", "T. Yang", "X. Zhang", "G. Meng", "C. Pan", "J. Sun" ], "title": "Detnas: Neural architecture search on object detection", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "X. Chu", "B. Zhang", "H. Ma", "R. Xu", "J. Li", "Q. Li" ], "title": "Fast, accurate and lightweight super-resolution with neural architecture search", "venue": "arXiv preprint arXiv:1901.07261,", "year": 1901 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L. Li", "K. Li", "L. Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "G. Ghiasi", "T. Lin", "Q. Le" ], "title": "Nas-fpn: Learning scalable feature pyramid architecture for object detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "H. Jin", "Q. Song", "X. Hu" ], "title": "Auto-keras: An efficient neural architecture search system", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "D. Jones" ], "title": "A taxonomy of global optimization methods based on response surfaces", "venue": "Journal of global optimization,", "year": 2001 }, { "authors": [ "K. Kandasamy", "W. Neiswanger", "J. Schneider", "B. Poczos", "E. Xing" ], "title": "Neural architecture search with bayesian optimisation and optimal transport", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "D. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "T. Kipf", "M. Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "X. Li", "Y. Zhou", "Z. Pan", "J. 
Feng" ], "title": "Partial order pruning: for best speed/accuracy trade-off in neural architecture search", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "C. Liu", "B. Zoph", "M. Neumann", "J. Shlens", "W. Hua", "L. Li", "L. Fei-Fei", "A. Yuille", "J. Huang", "K. Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "C. Liu", "L. Chen", "F. Schroff", "H. Adam", "W. Hua", "A. Yuille", "L. Fei-Fei" ], "title": "Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "H. Liu", "K. Simonyan", "O. Vinyals", "C. Fernando", "K. Kavukcuoglu" ], "title": "Hierarchical representations for efficient architecture search", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "H. Liu", "K. Simonyan", "Y. Yang" ], "title": "Darts: Differentiable architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "R. Luo", "F. Tian", "T. Qin", "E. Chen", "T. Liu" ], "title": "Neural architecture optimization", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "M. Luong", "D. Dohan", "A. Yu", "Q. Le", "B. Zoph", "V. Vasudevan" ], "title": "Exploring neural architecture search for language tasks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "M. MARCUS", "B. SANTORINI", "M. MARCINKIEWICZ" ], "title": "Building a large annotated corpus of english: the penn treebank", "venue": "Computational linguistics-Association for Computational Linguistics,", "year": 1993 }, { "authors": [ "R. Marler", "J. Arora" ], "title": "Survey of multi-objective optimization methods for engineering", "venue": "Structural and multidisciplinary optimization,", "year": 2004 }, { "authors": [ "J. Mockus", "V. Tiesis", "A. Zilinskas" ], "title": "The application of bayesian methods for seeking the extremum", "venue": "Towards global optimization,", "year": 1978 }, { "authors": [ "H. Pham", "M. Guan", "B. Zoph", "Q. Le", "J. Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "E. Real", "S. Moore", "A. Selle", "S. Saxena", "Y. Suematsu", "J. Tan", "Q. Le", "A. Kurakin" ], "title": "Large-scale evolution of image classifiers", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "E. Real", "A. Aggarwal", "Y. Huang", "Q. Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "E. Real", "A. Aggarwal", "Y. Huang", "Q. Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 4780–4789,", "year": 2019 }, { "authors": [ "F. Scarselli", "M. Gori", "A. Tsoi", "M. Hagenbuchner", "G. Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "C. Sciuto", "K. Yu", "M. Jaggi", "C. Musat", "M. 
Salzmann" ], "title": "Evaluating the search phase of neural architecture search", "venue": "arXiv preprint arXiv:1902.08142,", "year": 1902 }, { "authors": [ "J. Snoek", "H. Larochelle", "R. Adams" ], "title": "Practical bayesian optimization of machine learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "J. Snoek", "O. Rippel", "K. Swersky", "R. Kiros", "N. Satish", "N. Sundaram", "M. Patwary", "M. Prabhat", "R. Adams" ], "title": "Scalable bayesian optimization using deep neural networks", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "D. So", "Q. Le", "C. Liang" ], "title": "The evolved transformer", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "L. Wang", "S. Xie", "T. Li", "R. Fonseca", "Y. Tian" ], "title": "Sample-efficient neural architecture search by learning action space", "venue": "arXiv preprint arXiv:1906.06832,", "year": 1906 }, { "authors": [ "L. Wang", "Y. Zhao", "Y. Jinnai", "Y. Tian", "R. Fonseca" ], "title": "Alphax: exploring neural architectures with deep neural networks and monte carlo tree search", "venue": "arXiv preprint arXiv:1903.11059,", "year": 1903 }, { "authors": [ "S. Xie", "H. Zheng", "C. Liu", "L. Lin" ], "title": "Snas: stochastic neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "C. Ying", "A. Klein", "E. Christiansen", "E. Real", "K. Murphy", "F. Hutter" ], "title": "Nas-bench-101: Towards reproducible neural architecture search", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "C. Zhang", "M. Ren", "R. Urtasun" ], "title": "Graph hypernetworks for neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "H. Zhou", "M. Yang", "J. Wang", "W. Pan" ], "title": "Bayesnas: A bayesian approach for neural architecture search", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "B. Zoph", "Q. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "B. Zoph", "V. Vasudevan", "J. Shlens", "Q. Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Li" ], "title": "ResNet style search space. This search space aims to find when to perform down-sampling and when to double the channels. The ResNet style backbone consists of 5 stages with different resolutions from input images. The spatial size of Stage 1 to 5 is gradually down-sampled by the factor of 2. As suggested in Li et al. (2019), we fixed one 3 × 3 convolution layer (stride = 2) in Stage-1 and the beginning of Stage-2", "venue": null, "year": 2019 }, { "authors": [ "Cai" ], "title": "labor of designing networks. Particularly, in real applications, the objective of NAS is more preferred to be obtaining a decent accuracy under a limited computational budget. Thus a multi-objective NAS is a more practical setting than only focusing on accuracy. There are several approaches in NAS area: 1) Reinforcement learning-based algorithm", "venue": "Baker et al", "year": 2016 }, { "authors": [ "Real" ], "title": "2018b); Real et al. 
(2019a) try to evolve architectures or Network Morphism by mutating the current best architectures and explore new potential models; 3) Gradient-based algorithm Liu et al", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recently Neural Architecture Search (NAS) has aroused a surge of interest by its potentials of freeing the researchers from tedious and time-consuming architecture tuning for each new task and dataset. Specifically, NAS has already shown some competitive results comparing with hand-crafted architectures in computer vision: classification (Real et al., 2019b), detection, segmentation (Ghiasi et al., 2019; Chen et al., 2019; Liu et al., 2019a) and super-resolution (Chu et al., 2019). Meanwhile, NAS has also achieved remarkable results in natural language processing tasks (Luong et al., 2018; So et al., 2019).\nA variety of search strategies have been proposed, which may be categorized into two groups: oneshot NAS algorithms (Liu et al., 2019b; Pham et al., 2018; Luo et al., 2018), and sample-based algorithms (Zoph & Le, 2017; Liu et al., 2018a; Real et al., 2019b). One-shot NAS algorithms embed the architecture searching process into the training stage by using weight sharing, continuous relaxation or network morphisms. However, those methods cannot guarantee the optimal performance of the final model due to those approximation tricks and is usually sensitive to the initial seeds (Sciuto et al., 2019). On the other hand, sample-based algorithms are relatively slower but reliable. They explore and exploit the search space using some general search algorithms by providing potential candidates with higher accuracy. However, it requires fully training of huge amounts of candidate models.\nTypically, the focus of most existing NAS methods has been on the accuracy of the final searched model alone, ignoring the cost spent in the search phase. Thus, the comparison between existing search algorithms for NAS is very difficult. (Wang et al., 2019b) gives us an example of evaluating the NAS algorithms from this view. They compare the number of training architectures sampled until finding the global optimal architecture with the top accuracy in the NAS datasets. Besides accuracy, in real applications, there are many other objectives we should concern, such as speed/accuracy\ntrade-off. Hence, in this paper, we aim at designing an efficient multi-objective NAS algorithm to adaptively explore the search space and capture the structural information of architectures related to the performance.\nThe common issue faced by this problem is that optimizing objective functions is computationally expensive and the search space always contains billions of architectures. To tackle this problem, we present BOGCN-NAS, a NAS algorithm that utilizes Bayesian Optimization (BO) together with Graph Convolutional Network (GCN). BO is an efficient algorithm for finding the global optimum of costly black-box function (Mockus et al., 1978). In our method, we replace the popular Gaussian Processes model with a proposed GCN model as the surrogate function for BO (Jones, 2001). We have found that GCN can generalize fairly well with just a few architecture-accuracy pairs as its training set. As BO balances exploration and exploitation during searching and GCN extracts embeddings that can well represent model architectures, BOGCN-NAS is able to obtain the optimal model architecture with only a few samples from the search space. Thus, our method is more resource-efficient than the previous ones. Graph neural network has been proposed in previous work for predicting the parameters of the architecture using a graph hypernetwork (Zhang et al., 2019). 
However, it is still a one-shot NAS method and thus cannot ensure the performance of the final model found. In contrast, we use the graph embedding to predict the performance directly and can therefore guarantee performance as well.\nThe proposed BOGCN-NAS outperforms current state-of-the-art search methods, including Evolution (Real et al., 2019b), MCTS (Wang et al., 2019b) and LaNAS (Wang et al., 2019a). We observe consistent gains on multiple search spaces for CV and NLP tasks, i.e., NASBench-101 (denoted NASBench) (Ying et al., 2019) and LSTM-12K (a toy dataset). In particular, our method BOGCN-NAS is 128.4× more efficient than Random Search and 7.8× more efficient than the previous SOTA LaNAS on NASBench (Wang et al., 2019a). We further apply our method to multi-objective NAS, adding search objectives beyond accuracy, such as the number of parameters. Our method finds a superior Pareto front on NASBench. We also apply our algorithm to open-domain search with the NASNet search space and the ResNet style search space, finding competitive models in both scenarios. The experimental results demonstrate that our proposed algorithm can find a more competitive Pareto front than other sample-based methods." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 BAYESIAN OPTIMIZATION", "text": "Bayesian Optimization aims to find the global optimum over a compact subset X (here we consider a maximization problem):\n$x^* = \arg\max_{x \in \mathcal{X}} f(x)$. (1)\nBayesian Optimization maintains a prior belief about the objective function and updates the posterior probability with online sampling. Gaussian Processes (GPs) are widely used as a surrogate model to approximate the objective function (Jones, 2001), and the Expected Improvement acquisition function is often adopted (Mockus et al., 1978). For the hyperparameters Θ of the surrogate model, we define\n$\gamma(x) = \dfrac{\mu(x; D, \Theta) - f(x_{best})}{\sigma(x; D, \Theta)}$, (2)\nwhere µ(x; D, Θ) is the predictive mean, σ²(x; D, Θ) is the predictive variance and f(x_best) is the maximal value observed. The Expected Improvement (EI) criterion is defined as follows:\n$a_{EI}(x; D, \Theta) = \sigma(x; D, \Theta)\left[\gamma(x)\Phi(\gamma(x); 0, 1) + \mathcal{N}(\gamma(x); 0, 1)\right]$, (3)\nwhere N(·; 0, 1) is the probability density function of a standard normal and Φ(·; 0, 1) is its cumulative distribution." }, { "heading": "2.2 MULTI-OBJECTIVE OPTIMIZATION", "text": "Without loss of generality about max or min, given a search space X and m ≥ 1 objectives f_1 : X → R, . . ., f_m : X → R, a variable X_1 ∈ X dominates a variable X_2 ∈ X (denoted X_1 ≻ X_2) if (i) f_i(X_1) ≥ f_i(X_2) for all i ∈ {1, . . . , m}; and (ii) f_j(X_1) > f_j(X_2) for at least one j ∈ {1, . . . , m}. X*\nis Pareto optimal if there is no X ∈ X that dominates X*. The set of all Pareto optimal architectures constitutes the Pareto front P_f. A multi-objective optimization problem (MOP) aims at finding those inputs X ∈ X that cannot be dominated by any variable in X (Marler & Arora, 2004)." }, { "heading": "2.3 GRAPH CONVOLUTIONAL NETWORK", "text": "Let the graph be G = (V, E), where V is a set of N nodes and E is the set of edges. Let its adjacency matrix be A and its feature matrix be X. The graph convolutional network (GCN) is a learning model for graph-structured data (Kipf & Welling, 2016). For an L-layer GCN, the layer-wise propagation rule is given by:\n$H^{(l+1)} = f(H^{(l)}, A) = \mathrm{ReLU}(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)})$, (4)\nwhere $\tilde{A} = A + I$, I is the identity matrix, $\tilde{D}$ is a diagonal matrix with $\tilde{D}_{ii} = \sum_{j=1}^{N} \tilde{A}_{ij}$, H^(l) and W^(l) are the feature map and weight at the l-th layer respectively, and ReLU(·) is the ReLU activation function. 
H^(0) is the original feature matrix X, and H^(L) is the graph embedding matrix." }, { "heading": "3 BOGCN-NAS", "text": "To search for the optimal architecture more efficiently, we propose BOGCN-NAS, which performs predictive network performance optimization with a GCN (Section 3.2) while utilizing Bayesian Optimization. Figure 1 shows an overview of the proposed algorithm." }, { "heading": "3.1 MULTI-OBJECTIVE NAS", "text": "We formulate the NAS problem as a multi-objective optimization problem over the architecture search space A, where the objective functions can be accuracy, latency, number of parameters, etc. We aim to find architectures on the Pareto front of A. Specifically, when m = 1, it reduces to single-objective (usually accuracy) NAS and the corresponding Pareto front reduces to one optimal architecture." }, { "heading": "3.2 GCN PREDICTOR", "text": "The GCN predictor predicts the performance (e.g. accuracy) of an architecture. Compared with the MLP and LSTM predictors proposed before (Wang et al., 2019b), a GCN can better preserve the context of graph data. Another important characteristic of the GCN is its ability to handle a variable number of nodes, while an MLP cannot take a larger architecture as input. Even though an LSTM can handle variable-length sequences, its performance is not competitive because of the flat string encoding.\nA neural network can be viewed as a directed attributed graph, in which each node represents an operation (such as a convolution operation) and each edge represents a data flow. As a concrete illustration, we use the architectures in the NASBench dataset (Ying et al., 2019) as an example; the idea can be easily extended to other architectures. In NASBench, each architecture is constituted by stacking multiple repeated cells. Thus, we will focus on searching the cell architecture. An example cell in NASBench is illustrated on the left side of Figure 1, where "input" represents the input of the cell, "output" represents the output of the cell, and "1 × 1 Conv, 3 × 3 Conv, Max Pooling" are three different operations (5 operations in total).\nWe propose to encode the cell into an adjacency matrix A (asymmetric) and a feature matrix X, as the input of our GCN predictor. Note that the vanilla GCN only extracts node embeddings, while we want to obtain a graph embedding. Following (Scarselli et al., 2008), we add a global node to the original graph of the cell and let every node point at the global node. The adjacency matrix can be obtained directly from the graph structure. For the feature matrix, we use a one-hot coding scheme for each operation. Besides the original 5 different operations defined in NASBench, we add another operation (the global node) to the coding scheme.\nWe feed A and X to a multi-layer GCN model to obtain the embedding of every node H^(L) by (Eq. 4). For the high-level prediction, we leave the original nodes out and take only the embedding of the global node, because it already carries the overall context of the architecture. This embedding is then passed through one fully-connected layer with a sigmoid activation function to obtain the predicted accuracy. In the training phase, we use the MSE loss for regression optimization." }, { "heading": "3.3 INCORPORATING BO INTO GCN", "text": "Bayesian Optimization is an efficient model for search and optimization problems, which balances exploitation and exploration. 
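Before detailing how BO is incorporated, a minimal sketch of the GCN predictor of Section 3.2 may help. It is our own illustrative code (the layer count and width follow the four-layer, 64-unit configuration of Section 4.1), assuming the global node is stored as the last row of the inputs.

import torch
import torch.nn as nn

class GCNPredictor(nn.Module):
    def __init__(self, n_ops=6, hidden=64, n_layers=4):
        super().__init__()
        dims = [n_ops] + [hidden] * n_layers
        self.layers = nn.ModuleList(nn.Linear(i, o) for i, o in zip(dims, dims[1:]))
        self.out = nn.Linear(hidden, 1)   # replaced by the BLR during the search phase

    def embed(self, adj, feats):
        # adj: (n, n) adjacency with the global node appended; feats: (n, n_ops) one-hot.
        a = adj + torch.eye(adj.shape[0])             # A~ = A + I
        d = a.sum(dim=1).pow(-0.5)                    # D~^(-1/2); rows sum to >= 1 here
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)  # renormalized adjacency of Eq. (4)
        h = feats
        for layer in self.layers:
            h = torch.relu(layer(a_norm @ h))
        return h[-1]                                  # embedding of the global node

    def forward(self, adj, feats):
        return torch.sigmoid(self.out(self.embed(adj, feats)))   # predicted accuracy

With this predictor in hand, we return to Bayesian Optimization.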
It depends on updating the posterior distribution with the samples drawn from the search space, based on a cheap surrogate model.\nGPs are one of the most popular choices, because the Gaussian distribution is self-conjugate, so that the posterior distribution has the same form as the prior. (Kandasamy et al., 2018) and (Jin et al., 2019) both define heuristic distances between neural architectures and use GPs with the defined distance for searching. However, since the computation increases cubically with the number of samples (Snoek et al., 2015), GPs are too costly for the NAS problem, whose search space is always huge. Another drawback of GPs is that they cannot handle graph data directly without a special encoding scheme. In this work, we replace this popular surrogate model with our GCN predictor while still taking the uncertainty into consideration.\nInspired by previous work (Snoek et al., 2015), we first train the GCN predictor with the trained architecture set D containing architectures {(A_i, X_i)}_{i=1}^{n} with their actual performances {t_i}_{i=1}^{n}; then, during searching, we replace the last fully-connected layer with a Bayesian linear regressor (BLR) for Bayesian estimation and retain the GCN-related layers for point estimation. We only consider the uncertainty of the weights in the last fully-connected layer. We denote the embedding function of the global node by φ(·, ·) = [φ_1(·, ·), φ_2(·, ·), ..., φ_d(·, ·)]^T. We can get the embedding of every architecture φ(A_i, X_i) from the trained architecture set D and treat them as the basis functions for BO. For clarity, we define Φ to be the design matrix where Φ_ij = φ_j(A_i, X_i).\nDifferent from typical Bayesian linear regression (BLR) (Bishop, 2006), the final layer of our GCN predictor contains a non-linear activation function. Here we use an inverse-function trick to sidestep this complication: instead of fitting the true accuracy t, we estimate the pre-activation value logit(t), which converts the non-linear regression into a linear regression problem. The key quantity in BO is the ordering of the acquisition function over all sampled architectures rather than its true values, and due to the monotonicity of the sigmoid function, this ordering property still holds.\nThe predictive mean and variance given by our model without the last activation function are shown below:\n$\mu(A, X; D, \alpha, \beta) = m_N^T \phi(A, X)$, (5)\n$\sigma^2(A, X; D, \alpha, \beta) = \dfrac{1}{\beta} + \phi(A, X)^T S_N \phi(A, X)$, (6)\nwhere\n$m_N = \beta S_N \Phi^T \mathrm{logit}(t)$, $S_N^{-1} = \alpha I + \beta \Phi^T \Phi$. (7)\nHere, α and β are the precisions of the prior, which are hyperparameters of the Bayesian Optimization model. We can estimate them by maximizing the log marginal likelihood as follows (Snoek et al., 2012):\n$\log p(\mathrm{logit}(t) \mid \alpha, \beta) = \dfrac{M}{2}\log\alpha + \dfrac{N}{2}\log\beta - \dfrac{\beta}{2}\,\|\mathrm{logit}(t) - \Phi m_N\|^2 - \dfrac{\alpha}{2} m_N^T m_N - \dfrac{1}{2}\log|S_N^{-1}| - \dfrac{N}{2}\log 2\pi$. (8)" }, { "heading": "3.4 SEARCH WITH ALTERNATE LEARNING", "text": "NAS is an online learning process during which we can utilize newly fully-trained architectures sampled from the search space. Therefore, the GCN model and the BLR in BOGCN-NAS should be updated as the number of samples increases, for better generalization. In this work, because GCN retraining is more expensive than BLR updating, we update the BLR more frequently than the GCN predictor.\nThe algorithm of our proposed BOGCN-NAS is illustrated in Algorithm 1. Given the search space A, we initialize the trained architecture set U containing architectures (A_i, X_i) with their performances t_i = {f_{1i}, . . . , f_{mi}}. 
We first train the GCN predictor with U and replace the last fully-connected layer with the BLR described in Section 3.3. Then we randomly sample a subspace R as the candidate pool if A is so huge that we cannot cover every architecture. After obtaining the candidate pool, we calculate every candidate model's Expected Improvement as its estimated objective values t̂_j = {f̂_{1j}, . . . , f̂_{mj}}. Based on t̂_j and the multi-objective formulation (Section 3.1), we generate an estimated Pareto front, sample the estimated Pareto optimal models as a set S, and fully train them to obtain the true objective values t_j. The trained architecture set is then updated by U = U ∪ S. After accumulating a certain amount of new data, we update the GCN and BLR alternately for the next round of sampling, until the stopping criterion is satisfied." }, { "heading": "3.5 EXPONENTIAL WEIGHTED LOSS", "text": "For the GCN predictor alone, the MSE loss can achieve competitive performance for fitting all data. However, when it comes to the surrogate function for finding the top-performing model, we should pay more attention to architectures with high performance than to others.\nIn the search phase of BO, we select samples with top values of the acquisition function. Specifically, we expect to predict architectures with high performance as accurately as possible, while merely distinguishing low-performance architectures is sufficient for our purpose. To adapt to the NAS task, we propose the Exponential Weighted Loss for the surrogate GCN model and replace the common MSE loss with this weighted loss throughout Algorithm 1:\n$L_{exp} = \dfrac{1}{N(e - 1)} \sum_{i=1}^{N} (\exp(\tilde{y}_i) - 1)\,\|y_i - \tilde{y}_i\|^2$, (9)\nwhere y_i is the predicted accuracy, ỹ_i is the ground truth and e − 1 is a normalization factor (e is the base of the natural logarithm). Thus, our predictor will focus on prediction for those models with higher accuracy.\nAlgorithm 1 BOGCN-NAS Search Procedure. A is the given search space, l is the number of samples per iteration, k is the ratio of GCN/BLR update times, P_f is the optimal Pareto front and threshold is the criterion for stopping the search.\n1: Initialize the trained architecture set U = {A_i, X_i, t_i}_{i=1}^{n} from A and the current Pareto front P̃_f;\n2: Train the GCN and BLR initially;\n3: while |P̃_f ∩ P_f| / |P_f| < threshold do\n4:   for iteration = 1, 2, ..., k do\n5:     R = random_sample(A);\n6:     S = sample(GCN, BLR, l, R);\n7:     Fully train the sampled models S;\n8:     Update U = U ∪ S;\n9:     Update P̃_f in the set U;\n10:    BLR.update(GCN, U);\n11:  end for\n12:  GCN.retrain(U);\n13: end while\nreturn P̃_f\n14: function sample(GCN, BLR, l, R)\n15:   embeddings = GCN(A);\n16:   Predict the mean and variance of the models in R with the BLR and embeddings by (Eq. 5) and (Eq. 6);\n17:   Calculate the corresponding Expected Improvement by (Eq. 3);\n18:   Sample the top l Pareto optimal models sorted by Expected Improvement into the set S;\n19:   return S.\n20: end function" }, { "heading": "4 EXPERIMENT", "text": "Dataset and Search Space. In this section, we validate the performance of the proposed BOGCN-NAS on NASBench (Ying et al., 2019). This is currently the largest benchmark NAS dataset for computer vision tasks, with 420K architectures. To show the generalization of our method, we also collect 12K LSTM models trained on the PTB dataset (MARCUS et al., 1993) for an NLP task. We further compare our method in open domains with the NASNet search space (Zoph et al., 2018) and the ResNet style search space (Li et al., 2019). 
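Before the experiments, Equations (2)-(3) and (5)-(7) can be grounded with a small NumPy sketch of the Bayesian linear head over fixed GCN embeddings and the EI score it produces. This is our own illustrative code under the logit-target trick of Section 3.3 (it assumes accuracies t lie strictly in (0, 1)), not the authors' implementation.

import numpy as np
from scipy.stats import norm

def blr_fit(Phi, t, alpha=1.0, beta=100.0):
    # Phi: (n, d) design matrix of GCN embeddings; t: (n,) observed accuracies in (0, 1).
    y = np.log(t / (1.0 - t))                                    # logit(t), the regression target
    S_N_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi  # Eq. 7
    S_N = np.linalg.inv(S_N_inv)
    m_N = beta * S_N @ Phi.T @ y                                 # Eq. 7
    return m_N, S_N

def expected_improvement(phi, m_N, S_N, beta, y_best):
    # phi: (d,) embedding of one candidate; y_best: best observed logit(t).
    mu = phi @ m_N                                               # Eq. 5
    sigma = np.sqrt(1.0 / beta + phi @ S_N @ phi)                # Eq. 6
    gamma = (mu - y_best) / sigma                                # Eq. 2
    return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))   # Eq. 3

The exponential weighted loss of Eq. (9) is similarly compact in PyTorch (again a sketch; y is the prediction, y_true the ground truth):

import math
import torch

def exp_weighted_loss(y, y_true):
    # Eq. 9: MSE re-weighted by (exp(y_true) - 1) / (e - 1), which emphasizes
    # architectures whose ground-truth accuracy is high.
    w = (torch.exp(y_true) - 1.0) / (math.e - 1.0)
    return (w * (y - y_true).pow(2)).mean()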
We provide the details of the search spaces and the collection method in the Appendix.\nSpecifically, we first evaluate the performance of the proposed GCN predictor on its own, and then validate the whole framework on single-objective/multi-objective search problems against other baselines (with their default settings). The experimental results illustrate the efficiency of BOGCN-NAS." }, { "heading": "4.1 COMPARISON OF INDIVIDUAL PREDICTORS", "text": "We use a four-layer GCN, with 64 hidden units in each layer, as the surrogate model for BO. During training, we use the Adam optimizer (Kingma & Ba, 2014), with a learning rate of 0.001 and a minibatch size of 128. The proposed GCN predictor is compared with an MLP predictor and an LSTM predictor (Liu et al., 2018a; Wang et al., 2019b). Here we apply the MSE loss because we compare the stand-alone predictors, while we replace it with the Exponential Weighted Loss for the subsequent NAS problems. For the evaluation metric, we follow (Wang et al., 2019b) and use the correlation between the predicted accuracy and the true accuracy. One difference between our evaluation method and (Wang et al., 2019b) is that we compare the predictors' performance when training on fewer architectures rather than the whole search space, because NAS starts with little training data and we cannot train such a large number of architectures in practice. We used 1000 architectures in NASBench for training, 100 architectures for validation, and 10000 architectures for testing.\nTable 1 shows the correlation results of the various predictors and the numbers of predictor parameters. The predictions of the three predictors are also illustrated in Figure 2, where the values indicate the Gaussian kernel-density estimate. As can be seen, the GCN predictor can predict the performance of architectures more accurately than the other two predictors. In addition, the GCN predictor has fewer parameters than them." }, { "heading": "4.2 SINGLE-OBJECTIVE SEARCH", "text": "Single-objective (accuracy) search is a special case of multi-objective search. For the proposed BOGCN-NAS, we randomly sample 50 architectures to fully train and use them as the initial trained architecture set, which is counted in the total number of samples. Since the whole NASBench and LSTM datasets can be inferred easily (in less than 0.01s), we set R = A in this experiment, which means we take the total search space as the candidate pool. During the search phase, we use the GCN to obtain embeddings and use the Bayesian regressor to compute EI scores for all architectures in the search domain, rank them based on the score, select the top ten models to fully train, obtain their accuracies, add them to the trained architecture set, and update the best accuracy observed. This process is repeated k = 10 times. The GCN predictor is then retrained with the updated trained architecture set. This is repeated until the target model is found, over 50 runs. Note that the GCN model is trained using the Exponential Weighted Loss in the NAS procedure.\nBOGCN-NAS is compared with the following state-of-the-art sample-based NAS baselines: (i) Random Search, which explores the search space without performing exploitation; (ii) Regularized Evolution (Real et al., 2019b), which uses a heuristic evolution process for exploitation but is still constrained by the available human prior knowledge; (iii) Monte Carlo tree search (MCTS) (Wang et al., 2019b); and (iv) LaNAS (Wang et al., 2019a). 
Both MCTS and LaNAS only estimate the performance of models in a coarse subspace, and then select models randomly within that subspace. In contrast, Bayesian optimization conducts a more fine-grained search and predicts the expected improvement over the candidate pool.\nFigure 3 and Table 2 show the number of samples needed until the globally optimal architecture is found by the different methods. The proposed algorithm consistently outperforms the other baselines on the two different datasets. On NASBench, BOGCN-NAS is 128.4×, 59.6×, 50.5× and 7.8× more sample-efficient than Random Search, Regularized Evolution, MCTS and LaNAS, respectively. On the smaller NLP dataset, BOGCN-NAS can still find the optimal architecture with fewer samples.\nWe predict all architectures (420K; 12K) together on the NASBench and LSTM datasets because the cost of inferring them all at once is negligible. For larger search spaces, we can instead use the sampling methods mentioned in Section 3.4. In every iteration, we randomly sample a subset of architectures R from the search space A as the candidate pool for performance prediction and select the top models from this pool. The performance of BOGCN versus the pool sampling ratio (|R|/|A|) is shown in Table 3. As can be seen, BOGCN can still find the optimal model with fewer samples than the other baselines. Although random sampling is good enough, an evolutionary algorithm could serve as an alternative sampling method." }, { "heading": "4.3 MULTI-OBJECTIVE SEARCH", "text": "In this section, we focus on multi-objective (accuracy and number of parameters) search. We use the same settings for BOGCN-NAS as in Section 4.2; the only difference is the criterion for updating the best architecture observed. Here we extend the baselines of Section 4.2 to a multi-objective form for comparison. In detail, we sample 2,000 architectures in total and compare the found Pareto front P̃_f with the optimal Pareto front P_f on the NASBench dataset.\nAs shown in Figure 4a, the grey dots are all architectures of NASBench, the red dots are samples selected by BOGCN and the blue dots are architectures not dominated by our selected samples. Based on these, we can determine the respective Pareto fronts: the green dashed line is the optimal Pareto front and the red dashed line is the estimated Pareto front. Figure 4b shows the Pareto fronts estimated by the different algorithms, which demonstrates the superiority of the BOGCN method. Compared with the other baselines, the models sampled by our method are gathered near the optimal Pareto front, and the found Pareto front is also closer to the optimal one. This validates the efficiency of our algorithm on this multi-objective search task." }, { "heading": "4.4 OPEN DOMAIN SEARCH", "text": "In this section, we validate the proposed method on open domains: the NASNet search space and the ResNet style search space (described in the Appendix). Since the size of the open domains is enormous, we set the size of R to 1M in every iteration for both search spaces. The architectures found by BOGCN-NAS can be found in the Appendix." }, { "heading": "4.4.1 NASNET SEARCH SPACE", "text": "For the NASNet search space, we consider single-objective (accuracy) search on CIFAR-10. For efficiency, we train the sampled architectures with early stopping at 100 epochs rather than fully training them. 
Note that we stop the algorithm after a certain number of samples |S| rather than upon finding the optimal architecture, because we do not know the optimal architecture and the open domain contains billions of architectures. The other experimental settings are the same as in Section 4.2.\nWe pick the two best-performing architectures V1 and V2 within 200 and 400 samples respectively and fully train them. Table 2 compares our results with other baselines on CIFAR-10. As can be seen, even though one-shot NAS methods do not need to evaluate any architecture directly, the performance of the final models found is on average not as good as that of sample-based methods. Compared with other sample-based NAS methods, BOGCN outperforms all methods except for AmoebaNet-B, which costs 67.5× more evaluated samples." }, { "heading": "4.4.2 RESNET STYLE SEARCH SPACE", "text": "For the ResNet style search space, we validate the proposed method for multi-objective search on ImageNet (Deng et al., 2009). Here we consider the classification accuracy and the number of parameters of the model at the same time. Due to the large volume of the dataset, we train the sampled architectures with early stopping at 40 epochs rather than fully training them. The other experimental settings are the same as in Section 4.3.\nThe accuracy and number of parameters of the sampled models are illustrated in Figure 5. Compared to random sampling, BOGCN-NAS achieves a more competitive Pareto front. We fully train every model on the estimated Pareto front and pick three models (M1, M2, M3), which dominate the ResNets. The comparison between our found models and the well-known ResNet family is shown in Figure 5b. It shows that the ResNets are dominated by our found models by a clear margin." }, { "heading": "4.5 SEARCH SPACE EXTENSION TRANSFER", "text": "In this section, we investigate the ability of our proposed algorithm to transfer across search space extensions. Most NAS algorithms only focus on a static search space. In contrast, how to adapt these methods to an extended search space is still an open problem: for instance, after searching in one small search space A1, how can the obtained knowledge be transferred to a larger search space A2? For validation, we split NASBench into two sub-datasets: architectures with 6 nodes (62K) and architectures with 7 nodes (359K).\nUsing the same settings as in Section 4.2, we pretrained our GCN model on the architectures with 6 nodes and then transferred the GCN predictor to search on the architecture domain with 7 nodes. For comparison, we run the same algorithm without pretraining. The search method with the pretrained GCN predictor finds the optimal model after 511.9 samples on average, while the method without pretraining needs 1386.4 samples. As can be seen, pretraining reaches the optimal model 2.7× more efficiently than no pretraining, validating the transfer ability of the GCN predictor. Thus, BOGCN-NAS can handle architectures of different scales as long as their operation choices are the same." }, { "heading": "5 CONCLUSION", "text": "In this work, we propose BOGCN-NAS, a multi-objective NAS method using Bayesian Optimization with a Graph Convolutional Network predictor. We formulate the problem as a multi-objective optimization problem and utilize the efficiency of BO to search for top-performing architectures. Instead of using the popular Gaussian Processes surrogate model, we replace it with the proposed GCN predictor so that the graph structure can be better preserved. For NAS-specific tasks, we also propose a weighted loss focusing on top-performing models. 
Experimental results show that the proposed algorithm outperforms the SOTA LaNAS on single-objective NAS, and validate its efficiency on multi-objective NAS as well." }, { "heading": "A ABLATION STUDY", "text": "A.1 THE ADVANTAGE OF THE GCN PREDICTOR AND BAYESIAN OPTIMIZATION\nTo verify that the GCN is a superior choice of predictor compared with others (e.g. MLP, RNN), we replaced the GCN in the search algorithm with an MLP predictor. To demonstrate that Bayesian Optimization indeed improves the performance of our search algorithm, we also removed the BLR and used point estimation only; in other words, we select candidate models based only on the GCN's predicted accuracies of architectures. Therefore, there are four different models correspondingly: (1) MLP; (2) BOMLP; (3) GCN; (4) BOGCN, where MLP and GCN use the predictor only for model selection, and BOMLP and BOGCN apply Bayesian Optimization on top of the respective predictor.\nWith the other settings the same as in Section 4.2, we perform the single-objective NAS algorithms over 50 rounds, and the results of the experiments on the NASBench and LSTM datasets are shown in Figure 6. As can be seen, GCN is able to discover the optimal architecture with fewer samples than MLP, which proves the superiority of the GCN predictor, and our algorithm is more efficient with Bayesian Optimization.\nA.2 THE IMPROVEMENT OF THE EXPONENTIAL WEIGHTED LOSS\nTo demonstrate the improvement of our proposed weighted loss empirically, we compare it with the MSE loss. We also design two other weighted losses, shown below, to study the influence of the second-order derivative of the added weight:\n$L_{log} = \dfrac{1}{N \log 2} \sum_{i=1}^{N} \log(\tilde{y}_i + 1)\,\|y_i - \tilde{y}_i\|^2$, (10)\n$L_{linear} = \dfrac{1}{N} \sum_{i=1}^{N} \tilde{y}_i\,\|y_i - \tilde{y}_i\|^2$. (11)\nWe apply the same settings as in Section 4.2, and the result of the single-objective (accuracy) experiment on NASBench is shown in Figure 7. As can be seen, the exponential weighted loss outperforms the other three losses, which is consistent with our intuition." }, { "heading": "B SUPPLEMENTAL EXPERIMENTS", "text": "B.1 TIME COURSE PERFORMANCE COMPARISON\nFollowing (Wang et al., 2019b;a), besides comparing the number of training architectures sampled until the globally optimal architecture is found, we also evaluate the current best models during the search process. As shown in Figure 8, our proposed BOGCN-NAS outperforms the other search algorithms except at the very beginning of the LSTM model search.\nB.2 PREDICTOR TRAINING ON THE WHOLE SEARCH SPACE\nFor comparison with previous work (Wang et al., 2019b), we also train our GCN predictor on the whole NASBench dataset (420K models) (Ying et al., 2019). We use 85% of NASBench for training, 10% for validation and the remaining 5% for testing. As shown in Figure 9 (the values again indicate the density of architectures), GCN consistently outperforms the others, in line with the experiment training on fewer data (Section 4.1). Even though the correlations of the GCN and MLP are comparable here, this setting is less relevant than the case of training on fewer data." }, { "heading": "C DATASET AND SEARCH SPACE", "text": "C.1 NASBENCH ENCODING\nC.2 LSTM-12K DATASET\nTo create the LSTM model dataset, we follow the setting proposed by ENAS (Pham et al., 2018): we use an adjacency matrix and a list of operators to represent an LSTM cell, and randomly sampled 12K cell structures from that domain. Because the cells have natural graph structures, it is easy to feed them directly into the GCN for training. 
Due to the limitation of computational resources, we only sample architectures with a number of nodes less than or equal to 8, and trained each cell for 10 epochs on the PTB dataset (MARCUS et al., 1993). We use perplexity as the performance measure for the cells.\nC.3 NASNET SEARCH SPACE\nWe follow the search space setting of DARTS (Liu et al., 2019b), in which the architecture is built by stacking the learned cell. The cell consists of 4 blocks, two inputs (the output of the previous cell and of the cell before that) and one output. There are 8 types of operations allowed: 3 × 3 and 5 × 5 separable convolutions, 3 × 3 and 5 × 5 dilated separable convolutions, 3 × 3 max pooling, 3 × 3 average pooling, identity, and zero. Similar to previous work (Liu et al., 2018a), we apply the same cell architecture for both the "normal" and "reduction" layers. To adapt to the proposed GCN predictor, we regard each operation as a node and each data flow as an edge. Encoding examples are illustrated in Appendix E.\nC.4 RESNET STYLE SEARCH SPACE\nWe follow the setting of Li et al. (2019) to prepare the ResNet style search space. This search space aims to find when to perform down-sampling and when to double the channels. The ResNet style backbone consists of 5 stages with different resolutions of the input images. The spatial size from Stage 1 to 5 is gradually down-sampled by a factor of 2. As suggested in Li et al. (2019), we fix one 3 × 3 convolution layer (stride = 2) in Stage-1 and at the beginning of Stage-2. We use the same block setting as the bottleneck residual block in ResNet. The backbone architecture encoding string then looks like "1211-211-1111-12111", where "-" separates stages with different resolutions, "1" denotes a regular block with no change of channels, and "2" indicates that the number of base channels is doubled in this block. The base channel size is 64. In Section 4.4, we simply take "-", "1" and "2" as three different operations and encode architectures as strings. We train each model generated from this search space for 40 epochs with a fast-convergence learning rate schedule. Each architecture can be evaluated in 4 hours on one server with 8 V100 GPU machines." }, { "heading": "D NAS RELATED WORK", "text": "NAS aims at automatically finding a neural network architecture for a certain task, such as CV and NLP (Chen et al., 2018; Liu et al., 2019a; Chen et al., 2019), and for different datasets, without the human labor of designing networks. In particular, in real applications, the objective of NAS is often to obtain a decent accuracy under a limited computational budget, so multi-objective NAS is a more practical setting than focusing on accuracy only. There are several approaches in the NAS area: 1) Reinforcement learning-based algorithms: Baker et al. (2016); Zoph & Le (2017); Cai et al. (2018) train an RNN policy controller to generate a sequence of actions to design the cell structure for a specific CNN architecture; 2) Evolution-based algorithms: Real et al. (2017); Liu et al. (2018b); Real et al. (2019a) try to evolve architectures or use network morphisms by mutating the current best architectures and exploring new potential models; 3) Gradient-based algorithms: Liu et al. (2019b); Cai et al. (2019); Xie et al. (2019) define an architecture parameter for a continuous relaxation of the discrete search space, thus allowing differentiable optimization of the architecture; 4) Bayesian Optimization-based algorithms. 
(Kandasamy et al., 2018) and (Jin et al., 2019) define heuristic distances between architectures and apply BO with Gaussian Processes. Among these algorithms, most existing methods focus on a single objective (accuracy); others add computation constraints as a regularization loss in the gradient-based methods or as a reward in the RL-based algorithms. In contrast, our method reformulates multi-objective NAS as a non-dominated sorting problem and further enables an efficient search over a flexible, customized search space." }, { "heading": "E BEST FOUND MODELS", "text": "E.1 NASNET SEARCH SPACE\nFigures 11a and 11c show the architectures found by our method through the open domain search on the NASNet search space.\nE.2 RESNET STYLE SEARCH SPACE\nFigures 12, 13 and 14 show the architectures found by our method through the open domain search on the ResNet style search space." } ]
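To make the ResNet style encoding of Appendix C.4 concrete, a tiny parser (our own hypothetical helper, using ASCII hyphens for the stage separator) recovers the per-block channel widths from an encoding string:

def parse_backbone(code, base_channels=64):
    # "-" starts a new stage (the resolution halves), "2" doubles the channel
    # count in that block, and "1" keeps it unchanged.
    channels, stages = base_channels, []
    for stage in code.split("-"):
        blocks = []
        for op in stage:
            if op == "2":
                channels *= 2
            blocks.append(channels)
        stages.append(blocks)
    return stages

# parse_backbone("1211-211-1111-12111") ->
# [[64, 128, 128, 128], [256, 256, 256], [256, 256, 256, 256], [256, 512, 512, 512, 512]]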
2,019
null
SP:f719db5d0209fd670518cf1e28a66dfcd9de0a8c
[ "Augments the loss of video generation systems with a discriminator that considers multiple frames (as opposed to single frames independently) and a new objective termed ping-pong loss which is introduced in order to deal with “artifacts” that appear in video generation. The paper also proposes a few automatic metrics with which to compare systems. Although the performance does not convincingly exceed its competitors, the contribution seems to be getting the spatio-temporal adversarial loss to work at all.", "The paper presents a novel method for training video-to-video translation (vid2vid) models. The authors introduce a spatio-temporal adversarial discriminator for GAN training, that shows significant benefits over prior methods, in particular, parallel (as opposed to joint) spatial and temporal discriminators. In addition the authors introduce a self-supervised objective based on cycle dependency that is crucial for producing temporally consistent videos. A new set of metrics is introduced to validate the claims of the authors." ]
We focus on temporal self-supervision for GAN-based video generation tasks. While adversarial training successfully yields generative models for a variety of areas, temporal relationships in the generated data are much less explored. This is crucial for sequential generation tasks, e.g. video super-resolution and unpaired video translation. For the former, state-of-the-art methods often favor simpler norm losses such as L² over adversarial training. However, their averaging nature easily leads to temporally smooth results with an undesirable lack of spatial detail. For unpaired video translation, existing approaches modify the generator networks to form spatio-temporal cycle consistencies. In contrast, we focus on improving the learning objectives, and propose a temporally self-supervised algorithm. For both tasks, we show that temporal adversarial learning is key to achieving temporally coherent solutions without sacrificing spatial detail. We also propose a novel Ping-Pong loss to improve long-term temporal consistency. It effectively prevents recurrent networks from accumulating artifacts temporally, without suppressing detailed features. Additionally, we propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution. A series of user studies confirms the rankings computed with these metrics.
[]
[ { "authors": [ "Aayush Bansal", "Shugao Ma", "Deva Ramanan", "Yaser Sheikh" ], "title": "Recycle-gan: Unsupervised video retargeting", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yochai Blau", "Tomer Michaeli" ], "title": "The perception-distortion tradeoff", "venue": "In Proc. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Utah, USA,", "year": 2018 }, { "authors": [ "Ralph Allan Bradley", "Milton E Terry" ], "title": "Rank analysis of incomplete block designs: I. the method of paired comparisons", "venue": "Biometrika, 39(3/4):324–345,", "year": 1952 }, { "authors": [ "Dongdong Chen", "Jing Liao", "Lu Yuan", "Nenghai Yu", "Gang Hua" ], "title": "Coherent online video style transfer", "venue": "In Proc. Intl. Conf. Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Yang Chen", "Yingwei Pan", "Ting Yao", "Xinmei Tian", "Tao Mei" ], "title": "Mocycle-gan: Unpaired video-to-video translation", "venue": "arXiv preprint arXiv:1908.09514,", "year": 2019 }, { "authors": [ "M-L Eckert", "Wolfgang Heidrich", "Nils Thuerey" ], "title": "Coupled fluid density and motion from single views", "venue": "In Computer Graphics Forum,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A. 
Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Younghyun Jo", "Seoung Wug Oh", "Jaeyeon Kang", "Seon Joo Kim" ], "title": "Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and superresolution", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Jiwon Kim", "Jung Kwon Lee", "Kyoung Mu Lee" ], "title": "Accurate image super-resolution using very deep convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Wei-Sheng Lai", "Jia-Bin Huang", "Narendra Ahuja", "Ming-Hsuan Yang" ], "title": "Deep laplacian pyramid networks for fast and accurate superresolution", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Christian Ledig", "Lucas Theis", "Ferenc Huszár", "Jose Caballero", "Andrew Cunningham", "Alejandro Acosta", "Andrew Aitken", "Alykhan Tejani", "Johannes Totz", "Zehan Wang" ], "title": "Photo-realistic single image super-resolution using a generative adversarial network", "venue": null, "year": 2016 }, { "authors": [ "Renjie Liao", "Xin Tao", "Ruiyu Li", "Ziyang Ma", "Jiaya Jia" ], "title": "Video super-resolution via deep draft-ensemble learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Ce Liu", "Deqing Sun" ], "title": "A bayesian approach to adaptive video super resolution", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2011 }, { "authors": [ "Ding Liu", "Zhaowen Wang", "Yuchen Fan", "Xianming Liu", "Zhangyang Wang", "Shiyu Chang", "Thomas Huang" ], "title": "Robust video super-resolution with learned temporal dynamics", "venue": "In Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Bruce D. 
Lucas", "Takeo Kanade" ], "title": "An iterative image registration technique with an application to stereo vision (darpa)", "venue": "In Proceedings of the 1981 DARPA Image Understanding Workshop,", "year": 1981 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond YK Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "Least squares generative adversarial networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Kwanyong Park", "Sanghyun Woo", "Dahun Kim", "Donghyeon Cho", "In So Kweon" ], "title": "Preserving semantic and temporal consistency for unpaired video-to-video translation", "venue": "arXiv preprint arXiv:1908.07683,", "year": 2019 }, { "authors": [ "Eduardo Pérez-Pellitero", "Mehdi SM Sajjadi", "Michael Hirsch", "Bernhard Schölkopf" ], "title": "Photorealistic video super resolution", "venue": "arXiv preprint arXiv:1807.07930,", "year": 2018 }, { "authors": [ "Ekta Prashnani", "Hong Cai", "Yasamin Mostofi", "Pradeep Sen" ], "title": "PieAPP: Perceptual Image-Error Assessment through Pairwise Preference", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Manuel Ruder", "Alexey Dosovitskiy", "Thomas Brox" ], "title": "Artistic style transfer for videos", "venue": "In German Conference on Pattern Recognition,", "year": 2016 }, { "authors": [ "Mehdi SM Sajjadi", "Bernhard Schölkopf", "Michael Hirsch" ], "title": "Enhancenet: Single image super-resolution through automated texture synthesis", "venue": "In Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Mehdi SM Sajjadi", "Raviteja Vemulapalli", "Matthew Brown" ], "title": "Frame-recurrent video super-resolution", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Xin Tao", "Hongyun Gao", "Renjie Liao", "Jue Wang", "Jiaya Jia" ], "title": "Detail-revealing deep video super-resolution", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Kiwon Um", "Xiangyu Hu", "Nils Thuerey" ], "title": "Perceptual evaluation of liquid simulation methods", "venue": "ACM Transactions on Graphics (TOG),", "year": 2017 }, { "authors": [ "Chaoyue Wang", "Chang Xu", "Chaohui Wang", "Dacheng Tao" ], "title": "Perceptual adversarial networks for image-toimage transformation", "venue": "IEEE Transactions on Image Processing,", "year": 2018 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Guilin Liu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Video-to-video synthesis", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Allan Jabri", "Alexei A Efros" ], "title": "Learning correspondence from the cycle-consistency of time", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "You Xie", "Erik Franz", "Mengyu Chu", "Nils" ], "title": "Thuerey. 
tempoGAN: A Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": null, "year": 2018 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generative adversarial models (GANs) have been extremely successful at learning complex distributions such as natural images (Zhu et al., 2017; Isola et al., 2017). However, for sequence generation, directly applying GANs without carefully engineered constraints typically results in strong artifacts over time, due to the significant difficulties introduced by temporal changes. In particular, conditional video generation tasks are very challenging learning problems where generators should not only learn to represent the data distribution of the target domain, but also learn to correlate the output distribution over time with the conditional inputs. Their central objective is to faithfully reproduce the temporal dynamics of the target domain and not resort to trivial solutions such as features that arbitrarily appear and disappear over time.\nIn our work, we propose a novel adversarial learning method for a recurrent training approach that supervises both spatial contents and temporal relationships. We apply our approach to two video-related tasks that offer substantially different challenges: video super-resolution (VSR) and unpaired video translation (UVT). With no ground truth motion available, the spatio-temporal adversarial loss and the recurrent structure enable our model to generate realistic results while keeping the generated structures coherent over time. With the two learning tasks, we demonstrate how spatio-temporal adversarial training can be employed in paired as well as unpaired data domains. In addition to the adversarial network which supervises the short-term temporal coherence, long-term consistency is self-supervised using a novel bi-directional loss formulation, which we refer to as “Ping-Pong” (PP) loss in the following. The PP loss effectively avoids the temporal accumulation of artifacts, which can potentially benefit a variety of recurrent architectures. The central contributions of our work are: a spatio-temporal discriminator unit together with a careful analysis of training objectives for realistic and coherent video generation tasks, a novel PP loss supervising long-term consistency, and a set of metrics for quantifying temporal coherence based on motion estimation and perceptual distance. Together, our contributions lead to models that outperform previous work in terms of temporally-coherent detail, which we quantify with a wide range of metrics and user studies." }, { "heading": "2 RELATED WORK", "text": "Deep learning has made great progress for image generation tasks. While regular losses such as L2 (Kim et al., 2016; Lai et al., 2017) offer good performance for image super-resolution (SR) tasks in terms of PSNR metrics, GAN researchers found adversarial training (Goodfellow et al., 2014) to significantly improve the perceptual quality in multi-modal problems including image SR (Ledig et al., 2016), image translations (Zhu et al., 2017; Isola et al., 2017), and others. Perceptual metrics (Zhang et al., 2018; Prashnani et al., 2018) have been proposed to reliably evaluate image similarity by considering semantic features instead of pixel-wise errors.\nVideo generation tasks, on the other hand, require realistic results to change naturally over time. Recent works in VSR improve the spatial detail and temporal coherence by either using multiple low-resolution (LR) frames as inputs (Jo et al., 2018; Tao et al., 2017; Liu et al., 2017), or recurrently using previously estimated outputs (Sajjadi et al., 2018). 
The latter has the advantage of re-using high-frequency details over time. In general, adversarial learning is less explored for VSR, and applying it in conjunction with a recurrent structure gives rise to a special form of temporal mode collapse, as we will explain below. For video translation tasks, GANs are more commonly used, but discriminators typically only supervise the spatial content. E.g., Zhu et al. (2017) does not employ temporal constraints, and generators can fail to learn the temporal cycle-consistency. In order to learn temporal dynamics, RecycleGAN (Bansal et al., 2018) proposes to use a prediction network in addition to a generator, while concurrent work (Chen et al., 2019) chooses to learn motion translation in addition to spatial content translation. Being orthogonal to these works, we propose a spatio-temporal adversarial training for both VSR and UVT, and we show that temporal self-supervision is crucial for improving spatio-temporal correlations without sacrificing spatial detail. While L2 temporal losses based on warping are used to enforce temporal smoothness in video style transfer tasks (Ruder et al., 2016; Chen et al., 2017) as well as in concurrent GAN-based VSR work (Pérez-Pellitero et al., 2018) and UVT work (Park et al., 2019), this approach leads to an undesirable smoothing of spatial detail and temporal changes in the outputs. Likewise, the L2 temporal metric represents a sub-optimal way to quantify temporal coherence, and perceptual metrics that evaluate natural temporal changes have been unavailable up to now. We address this open issue, propose two improved temporal metrics, and demonstrate the advantages of temporal self-supervision over direct temporal losses.\nPrevious works, e.g. tempoGAN (Xie et al., 2018) and vid2vid (Wang et al., 2018b), have proposed adversarial temporal losses to achieve time consistency. While tempoGAN employs a second temporal discriminator with multiple aligned frames to assess the realism of temporal changes, it is not suitable for videos, as it relies on ground truth motions and employs single-frame processing that is sub-optimal for natural images. On the other hand, vid2vid focuses on paired video translations and proposes a video discriminator based on a conditional motion input that is estimated from the paired ground-truth sequences. We focus on more difficult unpaired translation tasks instead, and demonstrate the gains in quality of our approach in the evaluation section. For tracking and optical flow estimation, L2-based time-cycle losses (Wang et al., 2019) were proposed to constrain motions and tracked correspondences using symmetric video inputs. By optimizing indirectly via motion compensation or tracking, this loss improves the accuracy of the results. For video generation, we propose a PP loss that also makes use of symmetric sequences. However, we directly constrain the PP loss via the generated video content, which successfully improves the long-term temporal consistency in the video results." 
}, { "heading": "3 LEARNING TEMPORALLY COHERENT CONDITIONAL VIDEO GENERATION", "text": "Figure 2: a) The frame-recurrent generator G. b) The UVT cycle link using recurrent generators.\nGenerative Network Before explaining the temporal self-supervision in more detail, we outline the generative model to be supervised. Our generator networks produce image sequences in a frame-recurrent manner with the help of a recurrent generator G and a flow estimator F. We follow previous work (Sajjadi et al., 2018), where G produces the output g_t in the target domain B from the conditional input frame a_t of the input domain A, and recursively uses the previously generated output g_{t-1}. F is trained to estimate the motion v_t between a_{t-1} and a_t, which is then used as a motion compensation that aligns g_{t-1} to the current frame. This procedure, also shown in Fig. 2a), can be summarized as g_t = G(a_t, W(g_{t-1}, v_t)), where v_t = F(a_{t-1}, a_t) and W is the warping operation (a code sketch of this step is given below). While one generator is enough to map data from A to B for paired tasks such as VSR, unpaired generation requires a second generator to establish cycle consistency (Zhu et al., 2017). In the UVT task, we use two recurrent generators, mapping from domain A to B and back. As shown in Fig. 2b), given g^{a→b}_t = G_{ab}(a_t, W(g^{a→b}_{t-1}, v_t)), we can use a_t as the labeled data for g^{a→b→a}_t = G_{ba}(g^{a→b}_t, W(g^{a→b→a}_{t-1}, v_t)) to enforce consistency. A ResNet architecture is used for the VSR generator G, and an encoder-decoder structure is applied to the UVT generators and F. We intentionally keep the generators simple and in line with previous work, in order to demonstrate the advantages of the temporal self-supervision that we explain in the following paragraphs.\nFigure 3: The conditional VSR D_{s,t}.\nSpatio-Temporal Adversarial Self-Supervision The central building block of our approach is a novel spatio-temporal discriminator D_{s,t} that receives triplets of frames. This contrasts with typically used spatial discriminators, which supervise only a single image. 
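As a point of reference, the frame-recurrent step above can be sketched as follows. This is a minimal NumPy illustration rather than the authors' TensorFlow implementation: G and F are placeholder callables, the warping uses nearest-neighbor lookup instead of the differentiable bilinear sampling needed for training, and the upsampling of v_t to the output resolution required for VSR is omitted.

```python
import numpy as np

def warp(img, flow):
    """Backward-warp an (H, W, C) image with an (H, W, 2) flow field,
    using nearest-neighbor lookup and clamped out-of-bounds samples."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[src_y, src_x]

def frame_recurrent_step(G, F, a_prev, a_t, g_prev):
    """One recurrent step: g_t = G(a_t, W(g_{t-1}, v_t)), v_t = F(a_{t-1}, a_t)."""
    v_t = F(a_prev, a_t)          # estimated motion between consecutive inputs
    g_warped = warp(g_prev, v_t)  # align the previous output to frame t
    return G(a_t, g_warped)
```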
By concatenating multiple adjacent frames along the channel dimension, the frame triplets form an important building block for learning, because they can provide networks with gradient information regarding the realism of spatial structures as well as short-term temporal information, such as first- and second-order time derivatives.\nWe propose a D_{s,t} architecture, illustrated in Fig. 3 and Fig. 4, that primarily receives two types of triplets: three adjacent frames and the corresponding warped ones. We warp later frames backward and previous ones forward. While original frames contain the full spatio-temporal information, warped frames more easily yield temporal information with their aligned content. For the input variants we use the following notation: I_g = {g_{t-1}, g_t, g_{t+1}}, I_b = {b_{t-1}, b_t, b_{t+1}}; I^w_g = {W(g_{t-1}, v_t), g_t, W(g_{t+1}, v'_t)}, I^w_b = {W(b_{t-1}, v_t), b_t, W(b_{t+1}, v'_t)}.\nFor VSR tasks, D_{s,t} should guide the generator to learn the correlation between LR inputs and high-resolution (HR) targets. Therefore, three LR frames I_a = {a_{t-1}, a_t, a_{t+1}} from the input domain are used as a conditional input. The input of D_{s,t} can be summarized as I^b_{s,t} = {I_b, I^w_b, I_a}, labeled as real, and the generated input I^g_{s,t} = {I_g, I^w_g, I_a}, labeled as fake. In this way, the conditional D_{s,t} will penalize G if I_g contains less spatial detail or unrealistic artifacts with respect to I_a and I_b. At the same time, temporal relationships between the generated images I^w_g and those of the ground truth I^w_b should match. With our setup, the discriminator profits from the warped frames to classify realistic and unnatural temporal changes, and for situations where the motion estimation is less accurate, the discriminator can fall back to the original, i.e. not warped, images.\nFigure 4: Triplet inputs of the unconditional UVT D_{s,t}: static, warped, and original triplets.\nFor UVT tasks, we demonstrate that the temporal cycle-consistency between different domains can be established using the supervision of unconditional spatio-temporal discriminators. This is in contrast to previous work which focuses on the generative networks to form spatio-temporal cycle links. Our approach actually yields improved results, as we will show below, and Fig. 1 shows a preview of the quality that can be achieved using spatio-temporal discriminators. In practice, we found it crucial to ensure that generators first learn reasonable spatial features, and only then improve their temporal correlation. Therefore, different from the D_{s,t} of VSR, which always receives 3 concatenated triplets as an input, the unconditional D_{s,t} of UVT only takes one triplet at a time. Focusing on the generated data, the input for a single batch can either be a static triplet I^s_g = {g_t, g_t, g_t}, the warped triplet I^w_g, or the original triplet I_g. The same holds for the reference data of the target domain, as shown in Fig. 4. With sufficient but complex information contained in these triplets, transition techniques are applied so that the network can consider the spatio-temporal information step by step, i.e., we initially start with 100% static triplets I^s_g as the input (the triplet variants are summarized in the sketch below). 
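The triplet variants are straightforward to assemble; the following is a minimal NumPy sketch (illustrative only, not the authors' implementation). Here `warp` is a warping function such as the one sketched earlier, and for the conditional VSR discriminator the LR triplet I_a would be concatenated as well.

```python
import numpy as np

def original_triplet(frames, t):
    """I_g = {g_{t-1}, g_t, g_{t+1}}: three adjacent frames, channel-concatenated."""
    return np.concatenate([frames[t - 1], frames[t], frames[t + 1]], axis=-1)

def warped_triplet(frames, t, flow_fwd, flow_bwd, warp):
    """I^w_g = {W(g_{t-1}, v_t), g_t, W(g_{t+1}, v'_t)}: neighbors aligned to t."""
    prev = warp(frames[t - 1], flow_fwd)   # previous frame warped forward
    nxt = warp(frames[t + 1], flow_bwd)    # next frame warped backward
    return np.concatenate([prev, frames[t], nxt], axis=-1)

def static_triplet(frames, t):
    """I^s_g = {g_t, g_t, g_t}: purely spatial information, used early in training."""
    return np.concatenate([frames[t]] * 3, axis=-1)
```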
Then, over the course of training, 25% of them transition to I^w_g triplets with simpler temporal information, and another 25% transition to I_g afterwards, leading to a (50%, 25%, 25%) distribution of triplets. Details of the transition calculations are given in Appendix D. Here, the warping is again performed via F.\nWhile non-adversarial training typically employs loss formulations with static goals, GAN training yields dynamic goals, as the discriminative networks discover the learning objectives over the course of the training run. Therefore, their inputs have a strong influence on the training process and the final results. Modifying the inputs in a controlled manner can lead to different results and substantial improvements if done correctly, as will be shown in Sec. 4. Although the proposed concatenation of several frames seems like a simple change that has been used in a variety of projects, it is an important operation that allows discriminators to understand spatio-temporal data distributions. As will be shown below, it can effectively reduce the temporal problems encountered by spatial GANs. While L2-based temporal losses are widely used in the field of video generation, the spatio-temporal adversarial loss is crucial for preventing the inference of blurred structures in multi-modal data-sets. Compared to GANs using multiple discriminators, the single D_{s,t} network can learn to balance the spatial and temporal aspects from the reference data, and avoid inconsistent sharpness as well as overly smooth results. Additionally, by extracting shared spatio-temporal features, it allows for smaller network sizes.\nSelf-Supervision for Long-term Temporal Consistency When relying on a previous output as input, i.e., for frame-recurrent architectures, generated structures easily accumulate frame by frame. In an adversarial training, generators learn to heavily rely on previously generated frames and can easily converge towards strongly reinforcing spatial features over longer periods of time. For videos, this especially occurs along directions of motion, and these solutions can be seen as a special form of temporal mode collapse. We have noticed this issue in a variety of recurrent architectures; examples are shown in Fig. 5 a) and for the Dst model in Fig. 1. While this issue could be alleviated by training with longer sequences, we generally want generators to be able to work with sequences of arbitrary length for inference. To address this inherent problem of recurrent generators, we propose a new
Note that in contrast to the generator loss, the L2 norm is a correct choice here: We are not faced with multi-modal data where an L2 norm would lead to undesirable averaging, but rather aim to constrain the recurrent generator to its own, unique version over time. The PP terms provide constraints for short term consistency via ‖gn−1 − gn−1′‖2, while terms such as ‖g1 − g1′‖2 prevent long-term drifts of the results. As shown in Fig. 5(b), this PP loss successfully removes drifting artifacts while appropriate high-frequency details are preserved. In addition, it effectively extends the training data set, and as such represents a useful form of data augmentation. A comparison is shown in Appendix E to disentangle the effects of the augmentation of PP sequences and the temporal constrains. The results show that the temporal constraint is the key to reliably suppressing the temporal accumulation of artifacts, achieving consistency, and allowing models to infer much longer sequences than seen during training.\nPerceptual Loss Terms As perceptual metrics, both pre-trained NNs (Johnson et al., 2016; Wang et al., 2018a) and in-training discriminators (Xie et al., 2018) were successfully used in previous work. Here, we use feature maps from a pre-trained VGG-19 network (Simonyan & Zisserman, 2014), as well as Ds,t itself. In the VSR task, we can encourage the generator to produce features similar to the ground truth ones by increasing the cosine similarity between their feature maps. In UVT tasks without paired ground truth data, we still want the generators to match the distribution of features in the target domain. Similar to a style loss in traditional style transfer (Johnson et al., 2016), we here compute the Ds,t feature correlations measured by the Gram matrix instead. The feature maps of Ds,t contain both spatial and temporal information, and hence are especially well suited for the perceptual loss.\nLoss and Training Summary We now explain how to integrate the spatio-temporal discriminator into the paired and unpaired tasks. We use a standard discriminator loss for the Ds,t of VSR and a least-square discriminator loss for the Ds,t of UVT. Correspondingly, a non-saturated Ladv is used for the G and F of VSR, and a least-squares one is used for the UVT generators. As summarized in Table 1,G and F are trained with the mean squared loss Lcontent, adversarial losses Ladv , perceptual losses Lφ , the PP loss LPP , and a warping loss Lwarp, where again g, b and Φ stand for generated samples, ground truth images and feature maps of VGG-19 or Ds,t. We only show losses for the mapping from A to B for UVT tasks, as the backward mapping simply mirrors the terms. We refer to our full model for both tasks as TecoGAN below.1 Training parameters and details are given in Appendix G.\n1Source code, training data, and trained models will be published upon acceptance." }, { "heading": "4 ANALYSIS AND EVALUATION OF LEARNING OBJECTIVES", "text": "In the following, we illustrate the effects of temporal supervision using two ablation studies. In the first one, models trained with ablated loss functions show how Ladv and LPP change the overall learning objectives. Next, full UVT models are trained with different Ds,t inputs. This highlights how differently the corresponding discriminators converge to different spatio-temporal equilibriums, and the general importance of providing suitable data distributions from the target domain. 
While we provide qualitative and quantitative evaluations in the following, we also refer the reader to our supplemental HTML document2, with video clips that more clearly highlight the temporal differences.\n2Anonymized and time-stamped supplemental material available at: https://www.dropbox.com/sh/n07l8n51slh1e9c/AAAVngT9xsSzs1pJQqe5xV1Oa?dl=0.\nLoss Ablation Study Below we compare variants of our full TecoGAN model to EnhanceNet (ENet) (Sajjadi et al., 2017), FRVSR (Sajjadi et al., 2018), and DUF (Jo et al., 2018) for VSR, and CycleGAN (Zhu et al., 2017) and RecycleGAN (Bansal et al., 2018) for UVT. Specifically, ENet and CycleGAN represent state-of-the-art single-image adversarial models without temporal information, FRVSR and DUF are state-of-the-art VSR methods without adversarial losses, and RecycleGAN is a spatial adversarial model with a prediction network learning the temporal evolution.\nFor VSR, we first train a DsOnly model that uses a frame-recurrent G and F with a VGG-19 loss and only the regular spatial discriminator. Compared to ENet, which exhibits strong incoherence due to the lack of temporal information, DsOnly improves temporal coherence thanks to the frame-recurrent connection, but there are noticeable high-frequency changes between frames. The temporal profiles of DsOnly in Fig. 6 and 8 correspondingly contain sharp but broken lines. When adding a temporal discriminator in addition to the spatial one (DsDt), this version generates more
Compared to the TecoGAN model, it can generate more details, and the training process is more stable, indicating that the larger generator and Ds,t are more evenly balanced. Result images and temporal profiles are shown in Fig. 6 and Fig. 8. Video results are shown in Sec. 4 of the supplemental material.\nWe also carry out a similar ablation study for the UVT task. Again, we start from a single-image GAN-based model, a CycleGAN variant which already has two pairs of spatial generators and discriminators. Then, we train the DsOnly variant by adding flow estimation via F and extending the spatial generators to frame-recurrent ones. By augmenting the two discriminators to use the triplet inputs proposed in Sec. 3, we arrive at the Dst model with spatio-temporal discriminators, which does not yet use the PP loss. Although UVT tasks are substantially different from VSR tasks, the comparisons in Fig. 1 and Sec. 4.6 of our supplemental material yield similar conclusions. In these tests, we use renderings of 3D fluid simulations of rising smoke as our unpaired training data. These simulations are generated with randomized numerical simulations using a resolution of 643 for domain A and 2563 for domain B, and both are visualized with images of size 2562. Therefore, video translation from domain A to B is a tough task, as the latter contains significantly more turbulent and small-scale motions. With no temporal information available, the CycleGAN variant generates HR smoke that strongly flickers. The DsOnly model offers better temporal coherence by relying on its frame-recurrent input, but it learns a solution that largely ignores the current input and fails to keep reasonable spatio-temporal cycle-consistency links between the two domains. On the contrary, our Ds,t enables the Dst model to learn the correlation between the spatial and temporal aspects, thus improving the cycle-consistency. However, without Lpp, the Dst model (like the DsDt model of VSR) reinforces detail over time in an undesirable way. This manifests itself as inappropriate smoke density in empty regions. Using our full TecoGAN model which includes Lpp, yields the best results, with detailed smoke structures and very good spatio-temporal cycle-consistency.\nFor comparison, a DsDtPP model involving a larger number of separate networks, i.e. four discriminators, two frame-recurrent generators and the F , is trained. By weighting the temporal adversarial losses from Dt with 0.3 and the spatial ones from Ds with 0.5, we arrived at a balanced training run. Although this model performs similarly to the TecoGAN model on the smoke dataset, the proposed spatio-temporal Ds,t architecture represents a more preferable choice in practice, as it learns a natural balance of temporal and spatial components by itself, and requires fewer resources. Continuing along this direction, it will be interesting future work to evaluate variants, such as a shared Ds,t for both domains, i.e. a multi-class classifier network.\nBesides the smoke dataset, an ablation study for the Obama and Trump dataset from Fig. 1 shows a very similar behavior, as can be seen in the supplemental material.\nSpatio-temporal Adversarial Equilibriums Our evaluation so far highlights that temporal adversarial learning is crucial for achieving spatial detail that is coherent over time for VSR, and for enabling the generators to learn the spatio-temporal correlation between domains in UVT. 
Next, we\nwill shed light on the complex spatio-temporal adversarial learning objectives by varying the information provided to the discriminator network. The following tests Ds,t networks that are identical apart from changing inputs, and we focus on the smoke dataset.\nIn order to learn the spatial and temporal features of the target domain as well as their correlation, the simplest input for Ds,t consists of only the original, unwarped triplets, i.e. {Ig or Ib}. Using these, we train a baseline model, which yields a sub-optimal quality: it lacks sharp spatial structures, and contains coherent but dull motions. Despite containing the full information, these input triplets prevent Ds,t from providing the desired supervision. For paired video translation tasks, the vid2vid network achieves improved temporal coherence by using a video discriminator to supervise the output sequence conditioned with the ground-truth motion. With no ground-truth data available, we train a vid2vid variant by using the estimated motions and original triplets, i.e {Ig +F (gt−1, gt) +F (gt+1, gt) or Ib +F (bt−1, bt) +F (bt+1, bt)}, as the input for Ds,t. However, the result do not significantly improve. The motions are only partially reliable, and hence don’t help for the difficult unpaired translation task. Therefore, the discriminator still fails to fully correlate spatial and temporal features. We then train a third model, concat, using the original triplets and the warped ones, i.e. {Ig+Iwg or Ib+Iwb}. In this case, the model learns to generate more spatial details with a more vivid motion. I.e., the improved temporal information from the warped triplets gives the discriminator important cues. However, the motion still does not fully resemble the target domain. We arrive at our final TecoGAN model for UVT by controlling the composition of the input data: as outlined above, we first provide only static triplets {Isg or Isb}, and then apply the transitions of warped triplets {Iwg or Iwb}, and original triplets {Ig or Ib} over the course of training. In this way, the network can first learn to extract spatial features, and build on them to establish temporal features. Finally, discriminators learn features about the correlation of spatial and temporal content by analyzing the original triplets, and provide gradients such that the generators learn to use the motion information from the input and establish a correlation between the motions in the two unpaired domains. Consequently, the discriminator, despite receiving only a single triplet at once, can guide the generator to produce detailed structures that move coherently. Video comparisons are shown in Sec 5. of the supplemental material.\nResults and Metric Evaluation While the visual results discussed above provide a first indicator of the quality our approach achieves, quantitative evaluations are crucial for automated evaluations across larger numbers of samples. Below we focus on the VSR task as ground-truth data is available in this case. We conduct user studies and present evaluations of the different models w.r.t. established spatial metrics. We also motivate and propose two novel temporal metrics to quantify temporal coherence. A visual summary is shown in Fig. 7.\nFor evaluating image SR, Blau & Michaeli (2018) demonstrated that there is an inherent trade-off between the perceptual quality of the result and the distortion measured with vector norms or lowlevel structures such as PSNR and SSIM. 
On the other hand, metrics based on deep feature maps such as LPIPS (Zhang et al., 2018) can capture more semantic similarities. We measure the PSNR and LPIPS using the Vid4 scenes. With a PSNR decrease of less than 2dB over DUF which has twice the model size of ours, TecoGAN outperforms all methods by more than 40% on LPIPS.\nWhile traditional temporal metrics based on vector norm differences of warped frames, e.g. T-diff, can be easily deceived by very blurry results, e.g. bi-cubic interpolated ones, we propose to use a tandem of two new metrics, tOF and tLP, to measure the consistence over time. tOF measures the pixel-wise difference of motions estimated from sequences, and tLP measures perceptual changes over time using deep feature map:\ntOF = ‖OF (bt−1, bt)−OF (gt−1, gt)‖1 and tLP = ‖LP (bt−1, bt)− LP (gt−1, gt)‖1 , (1)\nwhere OF represents an optical flow estimation with LucasKanade (1981) and LP is the perceptual LPIPS metric. In tLP, the behavior of the reference is also considered, as natural videos exhibit a certain degree of changes over time. In conjunction, both pixel-wise differences and perceptual changes are crucial for quantifying realistic temporal coherence. While they could be combined into a single score, we list both measurements separately, as their relative importance could vary in different application settings. Our evaluation with these temporal metrics in Table 2 shows that all temporal adversarial models outperform spatial adversarial ones, and the full TecoGAN model performs very well: With a large amount of spatial detail, it still achieves good temporal coherence, on par with non-adversarial methods such as DUF and FRVSR. For VSR, we have confirmed these automated evaluations with several user studies. Across all of them, we find that the majority of the participants considered the TecoGAN results to be closest to the ground truth.\nFor the UVT tasks, where no ground-truth data is available, we can still evaluate tOF and tLP metrics by comparing the motion and the perceptual changes of the output data w.r.t. the ones from the input data , i.e., tOF = ∥∥OF (at−1, at)−OF (ga→bt−1 , ga→bt )∥∥1 and tLP= ∥∥LP (at−1, at)− LP (ga→bt−1 , ga→bt )∥∥1. With sharp spatial features and coherent motion, TecoGAN outperforms previous work on the Obama&Trump dataset, as shown in Table 3, although it is worth to point out that the tOF is less informative in this case, as the motion in the target domain is not necessarily pixel-wise aligned with the input. Overall, TecoGAN achieves good tLP scores thanks to its temporal coherence, on par with RecycleGAN, and its spatial detail is on par with CycleGAN. As for VSR, a perceptual evaluation by humans in the right column of Table 3 confirms our metric evaluations for the UVT task (details in Appendix C)." }, { "heading": "5 CONCLUSIONS AND DISCUSSION", "text": "In paired as well as unpaired data domains, we have demonstrated that it is possible to learn stable temporal functions with GANs thanks to the proposed discriminator architecture and PP loss. We have shown that this yields coherent and sharp details for VSR problems that go beyond what can be achieved with direct supervision. In UVT, we have shown that our architecture guides the training process to successfully establish the spatio-temporal cycle consistency between two domains. 
These results are reflected in the proposed metrics and user studies.\nWhile our method generates very realistic results for a wide range of natural images, our method can generate temporally coherent yet sub-optimal details in certain cases such as under-resolved faces and text in VSR, or UVT tasks with strongly different motion between two domains. For the latter case, it would be interesting to apply both our method and motion translation from concurrent work (Chen et al., 2019). This can make it easier for the generator to learn from our temporal self supervision. In our method, the interplay of the different loss terms in the non-linear training procedure does not provide a guarantee that all goals are fully reached every time. However, we found our method to be stable over a large number of training runs, and we anticipate that it will provide a very useful basis for wide range of generative models for temporal data sets." }, { "heading": "A QUALITATIVE ANALYSIS", "text": "For the VSR task, we test our model on a wide range of video data, including the generally used Vid4 dataset shown in Fig. 8 and 12, detailed scenes from the movie Tears of Steel (ToS, 2011) shown in Fig. 12, and others shown in Fig. 9. As mentioned in the main document, the TecoGAN model is trained with down-sampled inputs and it can similarly work with original images that were not down-sampled or filtered, such as a data-set of real-world photos (Liao et al., 2015). In Fig. 10, we compared our results to two other methods (Liao et al., 2015; Tao et al., 2017) that have used the same dataset. With the help of adversarial learning, our model is able to generate improved and realistic details in down-sampled images as well as captured images.\nG T\na D U F\nEN et\nO urs:\nD sD tPP D sD t\nTecoG AN\nTeco G\nAN a -\nD sO\nnly𝑡\nTemporal profiles 𝑥\nFRVSR\nGTa\nDUF\nENetDsDtPP TecoGANa\nTecoGANaa-\nDsOnly\nDsDt FRVSR\nCalendar scene\nGTa\nDUF\nENet\nO urs:\nDsDtPP\nDsDt\nTecoGAN\nTecoGANΘ\n𝑡\nTemporal profiles 𝑥\nFRVSR\nFigure 8: VSR temporal profile comparisons of the calendar scene (time shown along y-axis). TecoGAN models lead to natural temporal progressions, and our final model closely matches the desired ground truth behavior over time.\n13\nFor UVT tasks, we train models for Obama and Trump translations, LR- and HR- smoke simulation translations, as well as translations between smoke simulations and real-smoke captures. While smoke simulations usually contain strong numerical viscosity with details limited by the simulation resolution, the real smoke, captured using the setup from Eckert et al. (2018), contains vivid fluid motions with many vortices and high-frequency details. As shown in Fig. 11, our method can be used to narrow the gap between simulations and real-world phenomenon." }, { "heading": "B METRICS AND QUANTITATIVE ANALYSIS", "text": "Spatial Metrics We evaluate all VSR methods with PSNR together with the human-calibrated LPIPS metric (Zhang et al., 2018). While higher PSNR values indicate a better pixel-wise accuracy, lower LPIPS values represent better perceptual quality and closer semantic similarity. Mean values of the Vid4 scenes Liu & Sun (2011) are shown on the top of Table 4. Trained with direct vector norms losses, FRVSR and DUF achieve high PSNR scores. However, the undesirable smoothing induced by these losses manifests themselves in larger LPIPS distances. 
ENet, on the other hand, with no information from neighboring frames, yields the lowest PSNR and achieves an LPIPS score that is only slightly better than DUF and FRVSR. TecoGAN model with adversarial training achieves an excellent LPIPS score, with a PSNR decrease of less than 2dB over DUF, which is very reasonable, since PSNR and perceptual quality were shown to be anti-correlated (Blau & Michaeli, 2018), especially in regions where PSNR is very high. Based on good perceptual quality and reasonable pixel-wise accuracy, TecoGAN outperforms all other methods by more than 40% for LPIPS.\nTemporal Metrics For both VSR and UVT, evaluating temporal coherence without ground-truth motion is a very challenging problem. The metric T-diff = ‖gt −W (gt−1, vt)‖1 was used by Chen et al. (2017) as a rough assessment of temporal differences. As shown on bottom of Table 4, T-diff, due to its local nature, is easily deceived by blurry method such as the bi-cubic interrelation and\ncan not correlate well with visual assessments of coherence. By measuring the pixel-wise motion difference using tOF in together with the perceptual changes over time using tLP, we show the temporal evaluations for the VSR task in the middle of Table 4. Not surprisingly, the results of ENet show larger errors for all metrics due to their strongly flickering content. Bi-cubic up-sampling, DUF, and FRVSR achieve very low T-diff errors due to their smooth results, representing an easy, but undesirable avenue for achieving coherency. However, the overly smooth changes of the former two are identified by the tLP scores.While our DsOnly model generates sharper results at the expense of temporal coherence, it still outperforms ENet there. By adding temporal information to discriminators, our DsDt, DsDt+PP, TecoGAN and TecoGAN improve in terms of temporal metrics. Especially the full TecoGAN model stands out here. For the UVT tasks, temporal motions are evaluated by comparing to the input sequence. With sharp spatial features and coherent motion, TecoGAN outperforms previous work on the Obama&Trump dataset, as shown in Table 3.\nSpatio-temporal Evaluations Since temporal metrics can trivially be reduced for blurry image content, we found it important to evaluate results with a combination of spatial and temporal metrics. Given that perceptual metrics are already widely used for image evaluations, we believe it is the right time to consider perceptual changes in temporal evaluations, as we did with our proposed temporal coherence metrics. Although not perfect, they are not easily deceived. Specifically, tOF is more robust than a direct pixel-wise metric as it compares motions instead of image content. In the supplemental material, we visualize the motion difference and it can well reflect the visual inconsistencies. On the other hand, we found that our calculation of tLP is a general concept that can work reliably with different perceptual metric: When repeating the tLP evaluation with the PieAPP metric (Prashnani et al., 2018) instead of LP , i.e., tPieP = ‖f(yt−1, yt)− f(gt−1, gt)‖1 , where f(·) indicates the perceptual error function of PieAPP, we get close to identical results, listed in Fig. 13. 
The conclusions from tPieP also closely match the LPIPS-based evaluation: our network architecture can generate realistic and temporally coher-\nent detail, and the metrics we propose allow for a stable, automated evaluation of the temporal perception of a generated video sequence.\nBesides the previously evaluated the Vid4 dataset, with graphs shown in Fig. 14, 15, we also get similar evaluation results on the Tears of Steel data-sets (room, bridge, and face, in the following referred to as ToS scenes) and corresponding results are shown in Table 5 and Fig. 16. In all tests, we follow the procedures of previous work (Jo et al., 2018; Sajjadi et al., 2018) to make the outputs of\nall methods comparable, i.e., for all result images, we first exclude spatial borders with a distance of 8 pixels to the image sides, then further shrink borders such that the LR input image is divisible by 8 and for spatial metrics, we ignore the first two and the last two frames, while for temporal metrics, we ignore first three and last two frames, as an additional previous frame is required for inference. In the following, we conduct user studies for the Vid4 scenes. By comparing the user study results and the metric breakdowns shown in Table 4, we found our metrics to reliably capture the human temporal perception, as shown in Appendix C." }, { "heading": "C USER STUDIES", "text": "We conduct several user studies for the VSR task using five different methods, namely bi-cubic interpolation, ENet, FRVSR, DUF and our TecoGAN. The established 2AFC design (Fechner & Wundt, 1889; Um et al., 2017) is applied, i.e., participants have a pair-wise choice, with the groundtruth video shown as reference. One example can be seen in Fig. 17. The videos are synchronized and looped until user made the final decision. With no control to stop videos, users Participants cannot stop or influence the playback, and hence can focus more on the whole video, instead of specific spatial details. Videos positions (left/A or right/B) are randomized.\nAfter collecting 1000 votes from 50 users for every scene, i.e. twice for all possible pairs (5×4/2 = 10 pairs), we follow common procedure and compute scores for all models with the Bradley-Terry model (1952). The outcomes for the Vid4 scenes can be seen in Fig. 18 (overall scores are listed in Table 2 of the main document).\nFrom the Bradley-Terry scores for the Vid4 scenes we can see that the TecoGAN model performs very well, and achieves the first place in three cases, as well as a second place in the walk scene. The latter is most likely caused by the overall slightly smoother images of the walk scene, in conjunction with the presence of several human faces, where our model can lead to the generation of unexpected\ndetails. However, overall the user study shows that users preferred the TecoGAN output over the other two deep-learning methods with a 63.5% probability.\nThis result also matches with our metric evaluations. In Table 4, while TecoGAN achieves spatial (LPIPS) improvements in all scenes, DUF and FRVSR are not far behind in the walk scene. In terms of temporal metrics tOF and tLP, TecoGAN achieves similar or lower scores compared to FRVSR and DUF for calendar, foliage and city scenes. The lower performance of our model for the walk scene is likewise captured by higher tOF and tLP scores. 
Overall, the metrics confirm the performance of our TecoGAN approach and match the results of the user studies, which indicate that our proposed temporal metrics successfully capture important temporal aspects of human perception.\nFor UVT tasks which have no ground-truth data, we carried out two sets of user studies: One uses an arbitrary sample from the target domain as the reference and the other uses the actual input from the source domain as the reference. On the Obama&Trump data-sets, we evaluate results from CycleGAN, RecycleGAN, and TecoGAN following the same modality, i.e. a 2AFC design with 50 users for each run. E.g., on the left of Fig. 19, users evaluate the generated Obama in reference with the input Trump on the y-axis, while an arbitrary Obama video is shown as the reference on the x-axis. Effectively, the y-axis is more important than the x-axis as it indicates whether the translated result preserves the original expression. A consistent ranking of TecoGAN > RecycleGAN > CycleGAN is shown on the y-axis with clear separations, i.e. standard errors don’t overlap. The x-axis indicates whether the inferred result matches the general spatio-temporal content of the target domain. Our TecoGAN model also receives the highest scores here, although the responses are slightly more spread out. On the right of Fig. 19, we summarize both studies in a single graph highlighting that the TecoGAN model is consistently preferred by the participants of our user studies." }, { "heading": "D TECHNICAL DETAILS OF THE SPATIO-TEMPORAL DISCRIMINATOR", "text": "Motion Compensation Used in Warped Triplet In the TecoGAN architecture, Ds,t detects the temporal relationships between INgs,t and IN y s,t with the help of the flow estimation network F. However, at the boundary of images, the output of F is usually less accurate due to the lack of reliable neighborhood information. There is a higher chance that objects move into the field of view, or leave suddenly, which significantly affects the images warped with the inferred motion. An example is shown in Fig. 20. This increases the difficulty for Ds,t, as it cannot fully rely on the images being aligned via warping. To alleviate this problem, we only use the center region of INgs,t and IN y s,t as the input of the discriminator, and we reset a boundary of 16 pixels. Thus, for an input resolution of INgs,t and IN y s,t of\n128 × 128 for the VSR task, the inner part in size of 96 × 96 is left untouched, while the border regions are overwritten with zeros.\nThe flow estimation network F with the loss LG,F should only be trained to support G in reaching the output quality as determined by Ds,t, but not the other way around. The latter could lead to F networks that confuse Ds,t with strong distortions of IN g s,t and IN y s,t. In order to avoid the this undesirable case, we stop the gradient back propagation from INgs,t and IN y s,t to F. In this way, gradients from Ds,t to F are only back propagated through the generated samples gt−1, gt and gt+1 into the generator network. In this way Ds,t can guide G to improve the image content, and F learns\nA B\nReference\n01/20 Which one is closer to the reference video? O A O B\nFigure 17: A sample setup of user study.\nto warp the previous frame in accordance with the detail that G can synthesize. 
However, F does not adjust the motion estimation only to reduce the adversarial loss.\nCurriculum Learning for UVT Discriminators As mentioned in the main part, we train the UVT Ds,t with 100% spatial triplets at the very beginning. During training, 25% of them gradually transfer into warped triplets and another 25% transfer into original triplets. The transfer of the warped triplets can be represented as: (1−α)Icg+αIwg , with α growing form 0 to 1. For the original triplets, we additionally fade the “warping” operation out by using (1 − α)Icg + α{W (gt−1, vt ∗ β), gt,W (gt+1, v ′ t ∗ β)}, again with α growing form 0 to 1 and β decreasing from 1 to 0. We found this smooth transition to be helpful for a stable training." }, { "heading": "E DATA AUGMENTATION AND TEMPORAL CONSTRAINS IN THE PP LOSS", "text": "Since training with sequences of arbitrary length is not possible with current hardware, problems such as the streaking artifacts discussed above generally arise for recurrent models. In the proposed PP loss, both the Ping-Pang data augmentation and the temporal consistency constraint contribute to solving these problems. In order to show their separated contributions, we trained another TecoGAN variant that only employs the data augmentation without the constraint (i.e., λp = 0 in Table 1).\nDenoted as PP-Augment, we show its results in comparison with the DsDt and TecoGAN models in Fig. 21. Video results are shown in the in the supplemental material.\nDuring training, the generator of DsDt receives 10 frames, and generators of PP-Augment and TecoGAN see 19 frames. While DsDt shows strong recurrent accumulation artifacts early on, the PP-Augment version slightly reduces the artifacts. In Fig. 21, it works good for frame 15, but shows artifacts from frame 32 on. Only our regular model (TecoGAN ) successfully avoids temporal accumulation for all 40 frames. Hence, with the PP constraint, the model avoids recurrent accumulation of artifacts and works well for sequences that are substantially longer than the training length.\nAmong others, we have tested our model with ToS sequences of lengths 150, 166 and 233. For all of these sequences, the TecoGAN model successfully avoids temporal accumulation or streaking artifacts." }, { "heading": "F NETWORK ARCHITECTURE", "text": "In this section, we use the following notation to specify all network architectures used: conc() represents the concatenation of two tensors along the channel dimension; C/CT (input, kernel size, output channel, stride size) stands for the convolution and transposed convolution operation, respectively; “+” denotes element-wise addition; BilinearUp2 up-samples input tensors by a factor of 2 using bi-linear interpolation; BicubicResize4(input) increases the resolution of the input tensor to 4 times higher via bi-cubic up-sampling; Dense(input, output size) is a densely-connected layer, which uses Xavier initialization for the kernel weights.\nThe architecture of our VSR generator G is:\nconc(xt,W (gt−1, vt))→ lin ; C(lin, 3, 64, 1),ReLU→ l0; ResidualBlock(li)→ li+1 with i = 0, ..., n− 1;\nCT (ln, 3, 64, 2),ReLU→ lup2; CT (lup2, 3, 64, 2),ReLU→ lup4; C(lup4, 3, 3, 1),ReLU→ lres; BicubicResize4(xt) + lres → gt .\nIn TecoGAN , there are 10 sequential residual blocks in the generator ( ln = l10 ), while the TecoGAN generator has 16 residual blocks ( ln = l16 ). Each ResidualBlock(li) contains the following operations: C(li, 3, 64, 1),ReLU→ ri; C(ri, 3, 64, 1) + li → li+1. 
The VSR D_{s,t} architecture is:\nIN^g_{s,t} or IN^y_{s,t} → l_in; C(l_in, 3, 64, 1), Leaky ReLU → l_0;\nC(l_0, 4, 64, 2), BatchNorm, Leaky ReLU → l_1; C(l_1, 4, 64, 2), BatchNorm, Leaky ReLU → l_2; C(l_2, 4, 128, 2), BatchNorm, Leaky ReLU → l_3; C(l_3, 4, 256, 2), BatchNorm, Leaky ReLU → l_4;\nDense(l_4, 1), sigmoid → l_out.\nThe VSR discriminators used in our variant models, DsDt, DsDtPP and DsOnly, have a similar architecture to D_{s,t}. They only differ in terms of their inputs.\nThe flow estimation network F has the following architecture:\nconc(x_t, x_{t−1}) → l_in; C(l_in, 3, 32, 1), Leaky ReLU → l_0; C(l_0, 3, 32, 1), Leaky ReLU, MaxPooling → l_1; C(l_1, 3, 64, 1), Leaky ReLU → l_2; C(l_2, 3, 64, 1), Leaky ReLU, MaxPooling → l_3; C(l_3, 3, 128, 1), Leaky ReLU → l_4; C(l_4, 3, 128, 1), Leaky ReLU, MaxPooling → l_5; C(l_5, 3, 256, 1), Leaky ReLU → l_6; C(l_6, 3, 256, 1), Leaky ReLU, BilinearUp2 → l_7; C(l_7, 3, 128, 1), Leaky ReLU → l_8;\nC(l_8, 3, 128, 1), Leaky ReLU, BilinearUp2 → l_9; C(l_9, 3, 64, 1), Leaky ReLU → l_10; C(l_10, 3, 64, 1), Leaky ReLU, BilinearUp2 → l_11; C(l_11, 3, 32, 1), Leaky ReLU → l_12;\nC(l_12, 3, 2, 1), tanh → l_out; l_out · MaxVel → v_t.\nHere, MaxVel is a constant vector, which scales the network output to the normal velocity range.\nWhile F is the same for UVT tasks, UVT generators have an encoder-decoder structure:\nconc(x_t, W(g_{t−1}, v_t)) → l_in; C(l_in, 7, 32, 1), InstanceNorm, ReLU → l_0; C(l_0, 3, 64, 2), InstanceNorm, ReLU → l_1; C(l_1, 3, 128, 2), InstanceNorm, ReLU → l_2;\nResidualBlock(l_{2+i}) → l_{3+i} with i = 0, ..., n−1; CT(l_{n+2}, 3, 64, 2), InstanceNorm, ReLU → l_{n+3}; CT(l_{n+3}, 3, 32, 2), InstanceNorm, ReLU → l_{n+4};\nCT(l_{n+4}, 7, 3, 1), tanh → l_out.\nResidualBlock(l_{2+i}) contains the following operations: C(l_{2+i}, 3, 128, 1), InstanceNorm, ReLU → t_{2+i}; C(t_{2+i}, 3, 128, 1), InstanceNorm → r_{2+i}; r_{2+i} + l_{2+i} → l_{3+i}. We use 10 residual blocks for all UVT generators.\nSince UVT generators are larger than the VSR generator, we also use a larger D_{s,t} architecture:\nIN^g_{s,t} or IN^y_{s,t} → l_in; C(l_in, 4, 64, 2), ReLU → l_0;\nC(l_0, 4, 128, 2), InstanceNorm, Leaky ReLU → l_1; C(l_1, 4, 256, 2), InstanceNorm, Leaky ReLU → l_2; C(l_2, 4, 512, 2), InstanceNorm, Leaky ReLU → l_3; Dense(l_3, 1) → l_out.\nAgain, all ablation studies use the same architecture with different inputs." }, { "heading": "G TRAINING DETAILS", "text": "We use the non-saturated GAN for VSR and LSGAN (Mao et al., 2017) for UVT; both of them can prevent the gradient vanishing problem of a vanilla GAN (Goodfellow et al., 2014). While we train stably with a dynamic discriminator updating strategy, i.e. discriminators are not updated when there is already a large difference between D(I^b) and D(I^g), the training process could potentially be further improved with modern GAN algorithms, e.g. Wasserstein GAN (Gulrajani et al., 2017). We train G and F together for VSR, while we simply use the pre-trained F for UVT.\nFor the VSR task, our training data-set consists of 250 short HR videos, each with 120 frames. We use sequences with a length of 10 and a batch size of 4. A black image is used as the first previous frame of each video sequence. I.e., one batch contains 40 frames, and with the PP loss formulation, the network receives gradients from 76 frames in total for every training iteration. To improve the stability of the adversarial training, we pre-train G and F with a simple L2 loss of ∑‖g_t − b_t‖² + λ_w L_warp for 500k batches. We use 900k batches for the adversarial training stage. The data-sets of the UVT tasks contain around 2400 to 3600 frames. We train the generators with a sequence length of 6 and a batch size of 1. 
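The gradual fade-in of temporal triplets described in Appendix D enters UVT training as a simple schedule; a minimal Python sketch is given below (illustrative only; `warp(frame, flow)` and the flow tensors are assumed helpers, not names from the released code):

```python
def curriculum_triplets(center, prev, nxt, flow_fw, flow_bw, warp, step, fade_steps):
    """Fade the UVT D_{s,t} inputs from spatial triplets to temporal ones:
    alpha grows 0 -> 1 (blend weight), beta decays 1 -> 0 (warp strength)."""
    alpha = min(step / float(fade_steps), 1.0)
    beta = 1.0 - alpha
    blend = lambda a, b: (1.0 - alpha) * a + alpha * b
    # warped triplets: (1 - alpha) * I_c^g + alpha * I_w^g
    warped = (blend(center, warp(prev, flow_fw)), center,
              blend(center, warp(nxt, flow_bw)))
    # "original" triplets additionally fade the warping out via beta
    original = (blend(center, warp(prev, flow_fw * beta)), center,
                blend(center, warp(nxt, flow_bw * beta)))
    return warped, original
```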
Since temporal triplets are gradually faded in, we do not pre-train models for UVT tasks. With smaller datasets, we train UVT models with 100k batches.\nIn the pre-training stage of VSR, we train F and a generator with 10 residual blocks. An Adam optimizer with β = 0.9 is used throughout. The learning rate starts from 10^-4 and decays by 50% every 50k batches until it reaches 2.5 × 10^-5. This pre-trained model is then used as the initial state for all TecoGAN variants. In the adversarial training stage of VSR, all TecoGAN variants are trained with a fixed learning rate of 5 × 10^-5. The generators in DsOnly, DsDt, DsDtPP and TecoGAN have 10 residual blocks, whereas the larger TecoGAN variant has 6 additional residual blocks in its generator. Therefore, after loading 10 residual blocks from the pre-trained model, these additional residual blocks are faded in smoothly with a factor of 2.5 × 10^-5. We found this growing training methodology, first introduced by Growing GAN (Karras et al., 2017), to be stable and efficient in our tests. When training the VSR DsDt and DsDtPP, extra parameters are used to balance the two cooperating discriminators properly. Through experiments, we found Dt to be stronger. Therefore, we reduce the learning rate of Dt to 1.5 × 10^-5 in order to keep both discriminators balanced. At the same time, a factor of 0.0003 is used on the temporal adversarial loss to the generator, while the spatial adversarial loss has a factor of 0.001. During the VSR training, input LR video frames are cropped to a size of 32 × 32. In all VSR models, the Leaky ReLU operation uses a slope of 0.2 for the negative half space. Additional training parameters are listed in Table 6.\nFor all UVT tasks, we use a learning rate of 10^-4 for the first 90k batches, and the last 10k batches are trained with the learning rate decaying from 10^-4 to 0. Images of the input domain are cropped to a size of 256 × 256 during training, while the original size is 288 × 288. Additional training parameters are also listed in Table 6. For UVT, L_content and L_φ are only used to improve the convergence of the training process. We fade out L_content over the first 10k batches, and L_φ is used for the first 80k batches and faded out over the last 20k." }, { "heading": "H PERFORMANCE", "text": "TecoGAN is implemented in TensorFlow. While the generator and discriminator are trained together, we only need the trained generator network for the inference of new outputs after training, i.e., the whole discriminator network can be discarded. We evaluate the models on an Nvidia GeForce GTX 1080Ti GPU with 11 GB of memory; the resulting VSR performance is given in Table 2.\nThe VSR TecoGAN model and FRVSR have the same number of weights (843587 in the SRNet, i.e. the generator network, and 1.7M in F), and thus show very similar performance characteristics, with around 37 ms spent per frame. The larger VSR TecoGAN variant with 1286723 weights in the generator is slightly slower, spending 42 ms per frame. In the UVT task, generators spend around 60 ms per frame at a size of 512 × 512. However, compared with the DUF model, which has more than 6 million weights in total, the TecoGAN models perform significantly better thanks to their reduced size." } ]
2019
null
SP:5c78aac08d907ff07205fe28bf9fa4385c58f40d
[ "This paper proposes a new method for training certifiably robust models that achieves better results than the previous SOTA results by IBP, with a moderate increase in training time. It uses a CROWN-based bound in the warm up phase of IBP, which serves as a better initialization for the later phase of IBP and lead to improvements in both robust and standard accuracy. The CROWN-based bound uses IBP to compute bounds for intermediate pre-activations and applies CROWN only to computing the bounds of the margins, which has a complexity between IBP and CROWN. The experimental results are verify detailed to demonstrate the improvement.", "This work proposes CROWN-IBP - novel and efficient certified defense method against adversarial attacks, by combining linear relaxation methods which tend to have tighter bounds with the more efficient interval-based methods. With an attempt to augment the IBP method with its lower computation complexity with the tight CROWN bounds, to get the best of both worlds. One of the primary contributions here is that reduction of computation complexity by an order of \\Ln while maintaining similar or better bounds on error. The authors show compelling results with varied sized networks on both MNIST and CIFAR dataset, providing significant improvements over past baselines." ]
Training neural networks with verifiable robustness guarantees is challenging. Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architectures. Meanwhile, interval bound propagation (IBP) based training is efficient and significantly outperforms linear relaxation based methods on many tasks, yet it may suffer from stability issues since the bounds are much looser especially at the beginning of training. In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass. CROWN-IBP is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks. We conduct large scale experiments on MNIST and CIFAR datasets, and outperform all previous linear relaxation and bound propagation based certified defenses in ℓ∞ robustness. Notably, we achieve 7.02% verified test error on MNIST at ε = 0.3, and 66.94% on CIFAR-10 with ε = 8/255.
[ { "affiliations": [], "name": "Huan Zhang" }, { "affiliations": [], "name": "Hongge Chen" }, { "affiliations": [], "name": "Chaowei Xiao" }, { "affiliations": [], "name": "Sven Gowal" }, { "affiliations": [], "name": "Robert Stanforth" }, { "affiliations": [], "name": "Bo Li" }, { "affiliations": [], "name": "Duane Boning" }, { "affiliations": [], "name": "Cho-Jui Hsieh" } ]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Jacob Buckman", "Aurko Roy", "Colin Raffel", "Ian Goodfellow" ], "title": "Thermometer encoding: One hot way to resist adversarial examples", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Hongge Chen", "Huan Zhang", "Pin-Yu Chen", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Attacking visual language grounding with adversarial examples: A case study on neural image captioning", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "arXiv preprint arXiv:1902.02918,", "year": 2019 }, { "authors": [ "Krishnamurthy Dvijotham", "Marta Garnelo", "Alhussein Fawzi", "Pushmeet Kohli" ], "title": "Verification of deep probabilistic models", "venue": "CoRR, abs/1812.02795,", "year": 2018 }, { "authors": [ "Krishnamurthy Dvijotham", "Sven Gowal", "Robert Stanforth", "Relja Arandjelovic", "Brendan O’Donoghue", "Jonathan Uesato", "Pushmeet Kohli" ], "title": "Training verified learners with learned verifiers", "venue": "arXiv preprint arXiv:1805.10265,", "year": 2018 }, { "authors": [ "Krishnamurthy Dvijotham", "Robert Stanforth", "Sven Gowal", "Timothy Mann", "Pushmeet Kohli" ], "title": "A dual approach to scalable verification of deep networks. 
UAI, 2018c", "venue": null, "year": 2018 }, { "authors": [ "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Chaowei Xiao", "Atul Prakash", "Tadayoshi Kohno", "Dawn Song" ], "title": "Robust physical-world attacks on deep learning visual classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Sven Gowal", "Krishnamurthy Dvijotham", "Robert Stanforth", "Rudy Bunel", "Chongli Qin", "Jonathan Uesato", "Timothy Mann", "Pushmeet Kohli" ], "title": "On the effectiveness of interval bound propagation for training verifiably robust models", "venue": "arXiv preprint arXiv:1810.12715,", "year": 2018 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Warren He", "James Wei", "Xinyun Chen", "Nicholas Carlini", "Dawn Song" ], "title": "Adversarial example defenses: ensembles of weak defenses are not strong", "venue": "In Proceedings of the 11th USENIX Conference on Offensive Technologies,", "year": 2017 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko" ], "title": "Formal guarantees on the robustness of a classifier against adversarial manipulation", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Guy Katz", "Clark Barrett", "David L Dill", "Kyle Julian", "Mykel J Kochenderfer" ], "title": "Reluplex: An efficient SMT solver for verifying deep neural networks", "venue": "In International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Mathias Lecuyer", "Vaggelis Atlidakis", "Roxana Geambasu", "Daniel Hsu", "Suman Jana" ], "title": "Certified robustness to adversarial examples with differential privacy", "venue": "arXiv preprint arXiv:1802.03471,", "year": 2018 }, { "authors": [ "Bai Li", "Changyou Chen", "Wenlin Wang", "Lawrence Carin" ], "title": "Second-order adversarial attack and certifiable robustness", "venue": "arXiv preprint arXiv:1809.03113,", "year": 2018 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M Erfani", "Sudanthi Wijewickrema", "Michael E Houle", "Grant Schoenebeck", "Dawn Song", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Matthew Mirman", "Timon Gehr", "Martin Vechev" ], "title": "Differentiable abstract interpretation for provably robust neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Matthew Mirman", "Gagandeep Singh", "Martin Vechev" ], "title": "A provable defense for deep residual networks", "venue": "arXiv preprint 
arXiv:1903.12519,", "year": 2019 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Chongli Qin", "Krishnamurthy Dj Dvijotham", "Brendan O’Donoghue", "Rudy Bunel", "Robert Stanforth", "Sven Gowal", "Jonathan Uesato", "Grzegorz Swirszcz", "Pushmeet Kohli" ], "title": "Verification of non-linear specifications for neural networks", "venue": null, "year": 2019 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy Liang" ], "title": "Certified defenses against adversarial examples", "venue": "International Conference on Learning Representations (ICLR), arXiv preprint arXiv:1801.09344,", "year": 2018 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy S Liang" ], "title": "Semidefinite relaxations for certifying robustness to adversarial examples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hadi Salman", "Greg Yang", "Jerry Li", "Pengchuan Zhang", "Huan Zhang", "Ilya Razenshteyn", "Sebastien Bubeck" ], "title": "Provably robust deep learning via adversarially trained smoothed classifiers", "venue": "arXiv preprint arXiv:1906.04584,", "year": 2019 }, { "authors": [ "Hadi Salman", "Greg Yang", "Huan Zhang", "Cho-Jui Hsieh", "Pengchuan Zhang" ], "title": "A convex relaxation barrier to tight robust verification of neural networks", "venue": "arXiv preprint arXiv:1902.08722,", "year": 2019 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-GAN: Protecting classifiers against adversarial attacks using generative models", "venue": "arXiv preprint arXiv:1805.06605,", "year": 2018 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Matthew Mirman", "Markus Püschel", "Martin Vechev" ], "title": "Fast and effective robustness certification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Markus Püschel", "Martin Vechev" ], "title": "Robustness certification with refinement", "venue": null, "year": 2019 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "arXiv preprint arXiv:1710.10766,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Shiqi Wang", "Yizheng Chen", "Ahmed Abdou", "Suman Jana" ], "title": "Mixtrain: Scalable training of formally robust neural networks", "venue": "arXiv preprint arXiv:1811.02625,", "year": 2018 }, { "authors": [ "Shiqi Wang", "Kexin Pei", "Justin Whitehouse", "Junfeng Yang", "Suman Jana" ], "title": "Efficient formal safety analysis of neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tsui-Wei Weng", "Huan Zhang", "Hongge Chen", "Zhao Song", "Cho-Jui Hsieh", "Duane Boning", "Inderjit 
S Dhillon", "Luca Daniel" ], "title": "Towards fast computation of certified robustness for ReLU networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eric Wong", "Frank Schmidt", "Jan Hendrik Metzen", "J Zico Kolter" ], "title": "Scaling provable adversarial defenses", "venue": "Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Chaowei Xiao", "Ruizhi Deng", "Bo Li", "Fisher Yu", "Mingyan Liu", "Dawn Song" ], "title": "Characterizing adversarial examples based on spatial consistency information for semantic segmentation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Chaowei Xiao", "Bo Li", "Jun-Yan Zhu", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Generating adversarial examples with adversarial networks", "venue": null, "year": 2018 }, { "authors": [ "Chaowei Xiao", "Jun-Yan Zhu", "Bo Li", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Spatially transformed adversarial examples", "venue": null, "year": 2018 }, { "authors": [ "Chaowei Xiao", "Ruizhi Deng", "Bo Li", "Taesung Lee", "Benjamin Edwards", "Jinfeng Yi", "Dawn Song", "Mingyan Liu", "Ian Molloy" ], "title": "Advit: Adversarial frames identifier based on temporal consistency in videos", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Chaowei Xiao", "Dawei Yang", "Bo Li", "Jia Deng", "Mingyan Liu" ], "title": "Meshadv: Adversarial meshes for visual recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kai Y Xiao", "Vincent Tjeng", "Nur Muhammad Shafiullah", "Aleksander Madry" ], "title": "Training for faster adversarial robustness verification via inducing relu stability. ICLR, 2019c", "venue": null, "year": 2019 }, { "authors": [ "Kaidi Xu", "Sijia Liu", "Pu Zhao", "Pin-Yu Chen", "Huan Zhang", "Quanfu Fan", "Deniz Erdogmus", "Yanzhi Wang", "Xue Lin" ], "title": "Structured adversarial attack: Towards general implementation and better interpretability", "venue": "arXiv preprint arXiv:1808.01664,", "year": 2018 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Huan Zhang", "Tsui-Wei Weng", "Pin-Yu Chen", "Cho-Jui Hsieh", "Luca Daniel" ], "title": "Efficient neural network robustness certification with general activation functions", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Huan Zhang", "Hongge Chen", "Zhao Song", "Duane Boning", "Inderjit S Dhillon", "Cho-Jui Hsieh" ], "title": "The limitations of adversarial training and the blind-spot attack. 
ICLR, 2019b", "venue": null, "year": 2019 }, { "authors": [ "Huan Zhang", "Pengchuan Zhang", "Cho-Jui Hsieh" ], "title": "Recurjac: An efficient recursive algorithm for bounding jacobian matrix of neural networks and its applications", "venue": "AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Tianhang Zheng", "Changyou Chen", "Kui Ren" ], "title": "Distributionally adversarial attack", "venue": "arXiv preprint arXiv:1808.05537,", "year": 2018 }, { "authors": [ "Gowal" ], "title": "CONV k w×h+s” represents a 2D convolutional layer with k filters of size w×h using a stride of s in both dimensions. “FC n” = fully connected layer with n outputs. Last fully connected layer is omitted. All networks use ReLU activation functions. D HYPERPARAMETERS AND MODEL STRUCTURES FOR TRAINING STABILITY", "venue": null, "year": 2018 }, { "authors": [ "E OMITTED" ], "title": "RESULTS ON DM-SMALL AND DM-MEDIUM MODELS In Table 2 we report results from the best DM-Large model. Table C presents the verified, standard (clean) and PGD attack errors for all three model structures used in (Gowal et al., 2018) (DM-Small, DM-Medium and DM-Large) trained on MNIST and CIFAR-10 datasets. We evaluate IBP and CROWN-IBP under the same three κ settings", "venue": null, "year": 2018 }, { "authors": [ "Gowal" ], "title": "Verified errors reported", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The success of deep neural networks (DNNs) has motivated their deployment in some safety-critical environments, such as autonomous driving and facial recognition systems. Applications in these areas make understanding the robustness and security of deep neural networks urgently needed, especially their resilience under malicious, finely crafted inputs. Unfortunately, the performance of DNNs are often so brittle that even imperceptibly modified inputs, also known as adversarial examples, are able to completely break the model (Goodfellow et al., 2015; Szegedy et al., 2013). The robustness of DNNs under adversarial examples is well-studied from both attack (crafting powerful adversarial examples) and defence (making the model more robust) perspectives (Athalye et al., 2018; Carlini & Wagner, 2017a;b; Goodfellow et al., 2015; Madry et al., 2018; Papernot et al., 2016; Xiao et al., 2019b; 2018b;c; Eykholt et al., 2018; Chen et al., 2018; Xu et al., 2018; Zhang et al., 2019b). Recently, it has been shown that defending against adversarial examples is a very difficult task, especially under strong and adaptive attacks. Early defenses such as distillation (Papernot et al., 2016) have been broken by stronger attacks like C&W (Carlini & Wagner, 2017b). Many defense methods have been proposed recently (Guo et al., 2018; Song et al., 2017; Buckman et al., 2018; Ma et al., 2018; Samangouei et al., 2018; Xiao et al., 2018a; 2019a), but their robustness improvement cannot be certified – no provable guarantees can be given to verify their robustness. In fact, most of these uncertified defenses become vulnerable under stronger attacks (Athalye et al., 2018; He et al., 2017).\nSeveral recent works in the literature seeking to give provable guarantees on the robustness performance, such as linear relaxations (Wong & Kolter, 2018; Mirman et al., 2018; Wang et al., 2018a; Dvijotham et al., 2018b; Weng et al., 2018; Zhang et al., 2018), interval bound propagation (Mirman et al., 2018; Gowal et al., 2018), ReLU stability regularization (Xiao et al., 2019c), and distributionally\n∗Work partially done during an internship at DeepMind.\nrobust optimization (Sinha et al., 2018) and semidefinite relaxations (Raghunathan et al., 2018a; Dvijotham et al.). Linear relaxations of neural networks, first proposed by Wong & Kolter (2018), is one of the most popular categories among these certified defences. They use the dual of linear programming or several similar approaches to provide a linear relaxation of the network (referred to as a “convex adversarial polytope”) and the resulting bounds are tractable for robust optimization. However, these methods are both computationally and memory intensive, and can increase model training time by a factor of hundreds. On the other hand, interval bound propagation (IBP) is a simple and efficient method for training verifiable neural networks (Gowal et al., 2018), which achieved state-of-the-art verified error on many datasets. However, since the IBP bounds are very loose during the initial phase of training, the training procedure can be unstable and sensitive to hyperparameters.\nIn this paper, we first discuss the strengths and weakness of existing linear relaxation based and interval bound propagation based certified robust training methods. Then we propose a new certified robust training method, CROWN-IBP, which marries the efficiency of IBP and the tightness of a linear relaxation based verification bound, CROWN (Zhang et al., 2018). 
CROWN-IBP bound propagation involves an IBP-based fast forward bounding pass, and a tight convex relaxation based backward bounding pass (CROWN) which scales linearly with the size of the neural network output and is very efficient for problems with low output dimensions. Additionally, CROWN-IBP provides flexibility for exploiting the strengths of both IBP and convex relaxation based verifiable training methods.\n\nThe efficiency, tightness and flexibility of CROWN-IBP allow it to outperform state-of-the-art methods for training verifiable neural networks with ℓ∞ robustness under all settings on MNIST and CIFAR-10 datasets. In our experiments, on the MNIST dataset we reach 7.02% and 12.06% IBP verified error under ℓ∞ distortions ε = 0.3 and ε = 0.4, respectively, outperforming the state-of-the-art baseline results by IBP (8.55% and 15.01%). On CIFAR-10, at ε = 2/255, CROWN-IBP decreases the verified error from 55.88% (IBP) to 46.03% and matches convex relaxation based methods; at larger ε, CROWN-IBP outperforms all other methods with a noticeable margin." }, { "heading": "2 RELATED WORK AND BACKGROUND", "text": "" }, { "heading": "2.1 ROBUSTNESS VERIFICATION AND RELAXATIONS OF NEURAL NETWORKS", "text": "Neural network robustness verification algorithms seek upper and lower bounds of an output neuron for all possible inputs within a set S, typically a norm-bounded perturbation. Most importantly, the margins between the ground-truth class and any other class determine model robustness. However, it has already been shown that finding the exact output range is a non-convex problem and NP-complete (Katz et al., 2017; Weng et al., 2018). Therefore, recent works resorted to giving relatively tight but computationally tractable bounds of the output range with necessary relaxations of the original problem. Many of these robustness verification approaches are based on linear relaxations of non-linear units in neural networks, including CROWN (Zhang et al., 2018), DeepPoly (Singh et al., 2019), Fast-Lin (Weng et al., 2018), DeepZ (Singh et al., 2018) and Neurify (Wang et al., 2018b). We refer the readers to (Salman et al., 2019b) for a comprehensive survey on this topic. After linear relaxation, they bound the output of a neural network f_i(·) by linear upper/lower hyper-planes:\n\nA_{i,:} \Delta x + b_L \le f_i(x_0 + \Delta x) \le A_{i,:} \Delta x + b_U \quad (1)\n\nwhere the row vector A_{i,:} = W^{(L)}_{i,:} D^{(L-1)} W^{(L-1)} \cdots D^{(1)} W^{(1)} is the product of the network weight matrices W^{(l)} and diagonal matrices D^{(l)} reflecting the ReLU relaxations for output neuron i; b_L and b_U are two bias terms unrelated to Δx. Additionally, Dvijotham et al. (2018c;a); Qin et al. (2019) solve the Lagrangian dual of the verification problem; Raghunathan et al. (2018a;b); Dvijotham et al. propose semidefinite relaxations which are tighter compared to linear relaxation based methods, but computationally expensive. Bounds on the neural network's local Lipschitz constant can also be used for verification (Zhang et al., 2019c; Hein & Andriushchenko, 2017). Besides these deterministic verification approaches, randomized smoothing can be used to certify the robustness of any model in a probabilistic manner (Cohen et al., 2019; Salman et al., 2019a; Lecuyer et al., 2018; Li et al., 2018)." }, { "heading": "2.2 ROBUST OPTIMIZATION AND VERIFIABLE ADVERSARIAL DEFENSE", "text": "To improve the robustness of neural networks against adversarial perturbations, a natural idea is to generate adversarial examples by attacking the network and then use them to augment the training set (Kurakin et al., 2017). 
More recently, Madry et al. (2018) showed that adversarial training can\nbe formulated as solving a minimax robust optimization problem as in (2). Given a model with parameter θ, loss function L, and training data distribution X , the training algorithm aims to minimize the robust loss, which is defined as the maximum loss within a neighborhood {x+ δ|δ ∈ S} of each data point x, leading to the following robust optimization problem:\nmin θ E (x,y)∈X [ max δ∈S L(x+ δ; y; θ) ] . (2)\nMadry et al. (2018) proposed to use projected gradient descent (PGD) to approximately solve the inner max and then use the loss on the perturbed example x + δ to update the model. Networks trained by this procedure achieve state-of-the-art test accuracy under strong attacks (Athalye et al., 2018; Wang et al., 2018a; Zheng et al., 2018). Despite being robust under strong attacks, models obtained by this PGD-based adversarial training do not have verified error guarantees. Due to the nonconvexity of neural networks, PGD attack can only compute the lower bound of robust loss (the inner maximization problem). Minimizing a lower bound of the inner max cannot guarantee (2) is minimized. In other words, even if PGD-attack cannot find a perturbation with large loss, that does not mean there exists no such perturbation. This becomes problematic in safety-critical applications since those models need certified safety.\nVerifiable adversarial training methods, on the other hand, aim to obtain a network with good robustness that can be verified efficiently. This can be done by combining adversarial training and robustness verification—instead of using PGD to find a lower bound of inner max, certified adversarial training uses a verification method to find an upper bound of the inner max, and then update the parameters based on this upper bound of robust loss. Minimizing an upper bound of the inner max guarantees to minimize the robust loss. There are two certified robust training methods that are related to our work and we describe them in detail below.\nLinear Relaxation Based Verifiable Adversarial Training. One of the most popular verifiable adversarial training method was proposed in (Wong & Kolter, 2018) using linear relaxations of neural networks to give an upper bound of the inner max. Other similar approaches include Mirman et al. (2018); Wang et al. (2018a); Dvijotham et al. (2018b). Since the bound propagation process of a convex adversarial polytope is too expensive, several methods were proposed to improve its efficiency, like Cauchy projection (Wong et al., 2018) and dynamic mixed training (Wang et al., 2018a). However, even with these speed-ups, the training process is still slow. Also, this method may significantly reduce a model’s standard accuracy (accuracy on natural, unmodified test set). As we will demonstrate shortly, we find that this method tends to over-regularize the network during training, which is harmful for obtaining good accuracy.\nInterval Bound Propagation (IBP). Interval Bound Propagation (IBP) uses a very simple rule to compute the pre-activation outer bounds for each layer of the neural network. Unlike linear relaxation based methods, IBP does not relax ReLU neurons and does not consider the correlations between neurons of different layers, yielding much looser bounds. Mirman et al. (2018) proposed a variety of abstract domains to give sound over-approximations for neural networks, including the “Box/Interval Domain” (referred to as IBP in Gowal et al. 
(2018)) and showed that it could scale to much larger networks than other works (Raghunathan et al., 2018a) could at the time. Gowal et al. (2018) demonstrated that IBP could outperform many state-of-the-art results by a large margin with more precise approximations for the last linear layer and better training schemes. However, IBP can be unstable to use and hard to tune in practice, since the bounds can be very loose, especially during the initial phase of training, posing a challenge to the optimizer. To mitigate instability, Gowal et al. (2018) use a mixture of regular and minimax robust cross-entropy loss as the model's training loss." }, { "heading": "3 METHODOLOGY", "text": "Notation. We define an L-layer feed-forward neural network recursively as:\n\nf(x) = z^{(L)}, \quad z^{(l)} = W^{(l)} h^{(l-1)} + b^{(l)}, \quad W^{(l)} \in \mathbb{R}^{n_l \times n_{l-1}}, \quad b^{(l)} \in \mathbb{R}^{n_l},\n\nh^{(l)} = \sigma^{(l)}(z^{(l)}), \quad \forall l \in \{1, \dots, L-1\},\n\nwhere h^{(0)}(x) = x, n_0 represents the input dimension, n_L is the number of classes, and σ is an element-wise activation function. We use z to represent pre-activation neuron values and h to represent post-activation neuron values. Given an input example x_k with ground-truth label y_k, we define a set S(x_k, ε) = {x | ‖x − x_k‖∞ ≤ ε}, and we desire a robust network to have the property y_k = argmax_j [f(x)]_j for all x ∈ S. We define element-wise upper and lower bounds for z^{(l)} and h^{(l)} as \underline{z}^{(l)} \le z^{(l)} \le \overline{z}^{(l)} and \underline{h}^{(l)} \le h^{(l)} \le \overline{h}^{(l)}.\n\nVerification Specifications. The neural network verification literature typically defines a specification vector c ∈ R^{n_L} that gives a linear combination of the neural network output: c^⊤ f(x). In robustness verification, typically we set c_i = 1 where i is the ground truth class label, c_j = −1 where j is the attack target label, and all other elements of c to 0. This represents the margin between class i and class j. For an n_L-class classifier and a given label y, we define a specification matrix C ∈ R^{n_L × n_L} as:\n\nC_{i,j} = \begin{cases} 1, & \text{if } j = y, \; i \neq y \text{ (output of ground truth class)} \\ -1, & \text{if } i = j, \; i \neq y \text{ (output of other classes, negated)} \\ 0, & \text{otherwise (note that the } y\text{-th row contains all 0)} \end{cases} \quad (3)\n\nImportantly, each element of the vector m := Cf(x) ∈ R^{n_L} gives us a margin between class y and one of the other classes. We define the lower bound of Cf(x) for all x ∈ S(x_k, ε) as m(x_k, ε), which is a very important quantity: when all elements of m(x_k, ε) > 0, x_k is verifiably robust for any perturbation with ℓ∞ norm less than ε. m(x_k, ε) can be obtained by a neural network verification algorithm, such as convex adversarial polytope, IBP, or CROWN. Additionally, Wong & Kolter (2018) showed that for the cross-entropy (CE) loss:\n\n\max_{x \in S(x_k, \epsilon)} L(f(x); y; \theta) \le L(-m(x_k, \epsilon); y; \theta). \quad (4)\n\n(4) gives us the opportunity to solve the robust optimization problem (2) by minimizing this tractable upper bound of the inner max. This guarantees that max_{x ∈ S(x_k, ε)} L(f(x), y) is also minimized." }, { "heading": "3.1 ANALYSIS OF IBP AND LINEAR RELAXATION BASED VERIFIABLE TRAINING METHODS", "text": "Interval Bound Propagation (IBP) Interval Bound Propagation (IBP) uses a simple bound propagation rule. For the input layer we set x_L ≤ x ≤ x_U element-wise. For affine layers we have:\n\n\overline{z}^{(l)} = W^{(l)} \frac{\overline{h}^{(l-1)} + \underline{h}^{(l-1)}}{2} + |W^{(l)}| \frac{\overline{h}^{(l-1)} - \underline{h}^{(l-1)}}{2} + b^{(l)} \quad (5)\n\n\underline{z}^{(l)} = W^{(l)} \frac{\overline{h}^{(l-1)} + \underline{h}^{(l-1)}}{2} - |W^{(l)}| \frac{\overline{h}^{(l-1)} - \underline{h}^{(l-1)}}{2} + b^{(l)} \quad (6)\n\nwhere |W^{(l)}| takes the element-wise absolute value. Note that \overline{h}^{(0)} = x_U and \underline{h}^{(0)} = x_L.² And for element-wise monotonically increasing activation functions σ,\n\n\overline{h}^{(l)} = \sigma(\overline{z}^{(l)}), \quad \underline{h}^{(l)} = \sigma(\underline{z}^{(l)}). 
(7)\n1We implemented CROWN with efficient CNN support on GPUs: https://github.com/huanzhang12/CROWN-IBP 2For inputs bounded with general norms, IBP can be applied as long as this norm can be converted to per-neuron intervals after the first affine layer. For example, for `p norms (1 ≤ p ≤ ∞) Hölder’s inequality can be applied at the first affine layer to obtain h (1) and h(1), and IBP rule for later layers do not change.\nWe found that IBP can be viewed as training a simple augmented ReLU network which is friendly to optimizers (see Appendix A for more discussions). We also found that a network trained using IBP can obtain good verified errors when verified using IBP, but it can get much worse verified errors using linear relaxation based verification methods, including convex adversarial polytope (CAP) by Wong & Kolter (2018) (equivalently, Fast-Lin by Weng et al. (2018)) and CROWN (Zhang et al., 2018). Table 1 demonstrates that this gap can be very large on large .\nHowever, IBP is a very loose bound during the initial phase of training, which makes training unstable and hard to tune; purely using IBP frequently leads to divergence. Gowal et al. (2018) proposed to use a schedule where is gradually increased during training, and a mixture of robust cross-entropy loss with natural cross-entropy loss as the objective to stabilize training:\nmin θ E (x,y)∈X\n[ κL(x; y; θ) + (1− κ)L(−mIBP(x, ); y; θ) ] , (8)\nIssues with linear relaxation based training. Since IBP hugely outperforms linear relaxation based methods in the recent work (Gowal et al., 2018) in many settings, we want to understand what is going wrong with linear relaxation based methods. We found that, empirically, the norm of the weights in the models produced by linear relaxation based methods such as (Wong & Kolter, 2018) and (Wong et al., 2018) does not change or even decreases during training.\nIn Figure 1 we train a small 4-layer MNIST model and we linearly increase from 0 to 0.3 in 60 epochs. We plot the `∞ induced norm of the 2nd CNN layer during the training process of CROWN-IBP and (Wong et al., 2018). The norm of weight matrix using (Wong et al., 2018) does not increase. When becomes larger (roughly at = 0.2, epoch 40), the norm even starts to decrease slightly, indicating that the model is forced to learn smaller norm weights. Meanwhile, the verified error also starts to ramp up possibly due to the lack of capacity. We conjecture that linear relaxation based training over-regularizes the model, especially at a larger . However, in CROWN-IBP, the norm of weight matrices keep increasing during the training process, and verifiable error does not significantly increase when reaches 0.3.\nAnother issue with current linear relaxation based training or verification methods is their high computational and memory cost, and poor scalability. For the small network\nin Figure 1, convex adversarial polytope (with 50 random Cauchy projections) is 8 times slower and takes 4 times more memory than CROWN-IBP (without using random projections). Convex adversarial polytope scales even worse for larger networks; see Appendix J for a comparison." }, { "heading": "3.2 THE PROPOSED ALGORITHM: CROWN-IBP", "text": "Overview. We have reviewed IBP and linear relaxation based methods above. As shown in Gowal et al. (2018), IBP performs well at large with much smaller verified error, and also efficiently scales to large networks; however, it can be sensitive to hyperparameters due to its very imprecise bound at the beginning phase of training. 
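For concreteness, the IBP forward rules of Eqs. (5)-(7) amount to only a few lines of code. Below is a minimal NumPy sketch for a fully connected ReLU network (illustrative only; the actual implementations referenced in this paper are in TensorFlow and PyTorch):

```python
import numpy as np

def ibp_forward(weights, biases, x_lo, x_hi):
    """Propagate element-wise interval bounds (Eqs. 5-7): an affine layer maps
    the interval center by W and the radius by |W|; ReLU is applied monotonically."""
    lo, hi = x_lo, x_hi
    for l, (W, b) in enumerate(zip(weights, biases)):
        mid, rad = (hi + lo) / 2.0, (hi - lo) / 2.0
        mid, rad = W @ mid + b, np.abs(W) @ rad
        lo, hi = mid - rad, mid + rad
        if l < len(weights) - 1:          # activations on all but the last layer
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi                          # bounds on the logits z^(L)
```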
On the other hand, linear relaxation based methods can give tighter lower bounds at the cost of high computational expense, but they over-regularize the network at large ε and prevent us from achieving good standard and verified accuracy. We propose CROWN-IBP, a new certified defense where we optimize the following problem (θ represents the network parameters):\n\n\min_\theta \mathbb{E}_{(x,y) \in X} \Big[ \underbrace{\kappa L(x; y; \theta)}_{\text{natural loss}} + \underbrace{(1-\kappa) L\big(-\big((1-\beta)\, m_{\text{IBP}}(x, \epsilon) + \beta\, m_{\text{CROWN-IBP}}(x, \epsilon)\big); y; \theta\big)}_{\text{robust loss}} \Big] \quad (9)\n\nwhere our lower bound of the margin, m(x, ε), is a combination of two bounds with different natures: IBP, and a CROWN-style bound (which will be detailed below); L is the cross-entropy loss. Note that the combination is inside the loss function and is thus still a valid lower bound; thus (4) still holds and we are within the minimax robust optimization theoretical framework. Similar to IBP and TRADES (Zhang et al., 2019a), we use a mixture of natural and robust training loss with parameter κ, allowing us to explicitly trade off between clean accuracy and verified accuracy.\n\nAt a high level, the computation of the lower bounds of CROWN-IBP (m_CROWN-IBP(x, ε)) consists of IBP bound propagation in a forward bounding pass and CROWN-style bound propagation in a backward bounding pass. We discuss the details of the CROWN-IBP algorithm below.\n\nForward Bound Propagation in CROWN-IBP. In CROWN-IBP, we first obtain \overline{z}^{(l)} and \underline{z}^{(l)} for all layers by applying (5), (6) and (7). Then we obtain m_IBP(x, ε) = \underline{z}^{(L)} (assuming C is merged into W^{(L)}). The time complexity is comparable to two forward propagation passes of the network.\n\nLinear Relaxation of ReLU neurons Given \underline{z}^{(l)} and \overline{z}^{(l)} computed in the previous step, we first check if some neurons are always active (\underline{z}^{(l)}_k > 0) or always inactive (\overline{z}^{(l)}_k < 0), since they are effectively linear and no relaxations are needed. For the remaining unstable neurons, Zhang et al. (2018); Wong & Kolter (2018) give a linear relaxation for the ReLU activation function:\n\n\alpha_k z^{(l)}_k \le \sigma(z^{(l)}_k) \le \frac{\overline{z}^{(l)}_k}{\overline{z}^{(l)}_k - \underline{z}^{(l)}_k} \big( z^{(l)}_k - \underline{z}^{(l)}_k \big), \quad \text{for all } k \in [n_l] \text{ and } \underline{z}^{(l)}_k < 0 < \overline{z}^{(l)}_k, \quad (10)\n\nwhere 0 ≤ α_k ≤ 1; Zhang et al. (2018) propose to adaptively select α_k = 1 when \overline{z}^{(l)}_k > |\underline{z}^{(l)}_k| and 0 otherwise, which minimizes the relaxation error. Following (10), for an input vector z^{(l)}, we effectively replace the ReLU layer with a linear layer, giving upper or lower bounds of the output:\n\n\underline{D}^{(l)} z^{(l)} \le \sigma(z^{(l)}) \le \overline{D}^{(l)} z^{(l)} + \overline{c}^{(l)}_d \quad (11)\n\nwhere \underline{D}^{(l)} and \overline{D}^{(l)} are two diagonal matrices representing the “weights” of the relaxed ReLU layer. Other general activation functions can be supported similarly. In the following we focus on conceptually presenting the algorithm, while more details of each term can be found in the Appendix.\n\nBackward Bound Propagation in CROWN-IBP. Unlike IBP, CROWN-style bounds start bounding from the last layer, so we refer to this as backward bound propagation (not to be confused with the back-propagation algorithm used to obtain gradients). Suppose we want to obtain the lower bound [m_CROWN-IBP(x, ε)]_i := \underline{z}^{(L)}_i (we assume the specification matrix C has been merged into W^{(L)}). The input to layer W^{(L)} is σ(z^{(L−1)}), which can be bounded linearly by Eq. (11). 
CROWN-style bounds choose the lower bound of σ(z(L−1)k ) (LHS of (11)) when W (L) i,k is positive, and choose the upper bound otherwise.We then merge W(L) and the linearized ReLU layer together and define:\nA (L−1) i,: = W (L) i,: D i,(L−1), where Di,(L−1)k,k =\n{ D\n(L−1) k,k , if W (L) i,k > 0\nD (L−1) k,k , if W (L) i,k ≤ 0\n(12)\nNow we have a lower bound z(L)i = A (L−1) i,: z (L−1) + b (L−1) i ≤ z (L) i where b (L−1) i =∑\nk,W (L) i,k <0\nW (L) i,k c (l) k + b (L) collects all terms not related to z(L−1). Note that the diagonal matrix\nDi,(L−1) implicitly depends on i. Then, we merge A(L−1)i,: with the next linear layer, which is straight forward by plugging in z(L−1) = W(L−1)σ(z(L−2)) + b(L−1):\nz (L) i ≥ A (L−1) i,: W (L−1)σ(z(L−2)) + A (L−1) i,: b (L−1) + b (L−1) i .\nThen we continue to unfold the next ReLU layer σ(z(L−2)) using its linear relaxations, and compute a new A(L−2) ∈ RnL×nL−2 matrix, with A(L−2)i,: = A (L−1) i,: W\n(L−1)Di,(L−2) in a similar manner as in (12). Along with the bound propagation process, we need to compute a series of matrices, A(L−1), · · · ,A(0), where A(l)i,: = A (l+1) i,: W (l+1)Di,(l) ∈ RnL×n(l) , and A(0)i,: = A (1) i,: W (1) = W (L) i,: D\ni,(L−1)W(L−2)Di,(L−2)A(L−2) · · ·Di,(1)W(1). At this point, we merged all layers of the network into a linear layer: z(L)i ≥ A (0) i,: x + b, where b collects all terms not related to x. A lower bound for z(L)i with xL ≤ x ≤ xU can then be easily given as\n[mCROWN-IBP]i ≡ z (L) i = A (0) i,: x + b ≥ ∑ k,A\n(0) i,k<0\nA (0) i,kxU,k + ∑ k,A\n(0) i,k>0\nA (0) i,kxL,k + b (13)\nFor ReLU networks, convex adversarial polytope (Wong & Kolter, 2018) uses a very similar bound propagation procedure. CROWN-style bounds allow an adaptive selection of αi in (10), thus often gives better bounds (e.g., see Table 1). We give details on each term in Appendix L.\nComputational Cost. Ordinary CROWN (Zhang et al., 2018) and convex adversarial polytope (Wong & Kolter, 2018) use (13) to compute all intermediate layer’s z(m)i and z (m) i (m ∈ [L]), by considering W(m) as the final layer of the network. For each layer m, we need a different set of mA matrices, defined as Am,(l), l ∈ {m− 1, · · · , 0}. This causes three computational issues: • Unlike the last layer W(L), an intermediate layer W(m) typically has a much larger output dimension nm nL thus all Am,(l) ∈ {Am,(m−1), · · · ,Am,(0)} have large dimensions Rnm×nl . • Computation of all Am,(l) matrices is expensive. Suppose the network has n neurons for all L− 1 intermediate and input layers and nL n neurons for the output layer (assuming L ≥ 2), the time complexity of ordinary CROWN or convex adversarial polytope is O( ∑L−2 l=1 ln\n3 + (L− 1)nLn2) = O((L− 1)2n3 + (L− 1)nLn2) = O(Ln2(Ln+ nL)). A ordinary forward propagation only takes O(Ln2) time per example, thus ordinary CROWN does not scale up to large networks for training, due to its quadratic dependency in L and extra Ln times overhead.\n• When both W(l) and W(l−1) represent convolutional layers with small kernel tensors K(l) and K(l−1), there are no efficient GPU operations to form the matrix W(l)D(l−1)W(l−1) using K(l) and K(l−1). Existing implementations either unfold at least one of the convolutional kernels to fully connected weights, or use sparse matrices to represent W(l) and W(l−1). They suffer from poor hardware efficiency on GPUs.\nIn CROWN-IBP, we use IBP to obtain bounds of intermediate layers, which takes only twice the regular forward propagate time (O(Ln2)), thus we do not have the first and second issues. 
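To make the backward pass concrete, the propagation of Eqs. (12)-(13), together with the specification matrix of Eq. (3), can be sketched as follows. This is an illustrative NumPy version for fully connected ReLU networks, with the IBP pre-activation bounds assumed to come from the forward pass; it is not the released GPU implementation:

```python
import numpy as np

def spec_matrix(n_classes: int, y: int) -> np.ndarray:
    # Eq. (3): row i of C f(x) is the margin f_y - f_i; the y-th row is all zero.
    C = -np.eye(n_classes)
    C[:, y] += 1.0
    C[y, :] = 0.0
    return C

def crown_ibp_margin_lower(weights, biases, z_lo, z_hi, x_lo, x_hi, C):
    """Backward bounding pass: propagate the specification through linearized
    ReLU layers (Eq. 12) using IBP intermediate bounds, then close the bound
    with the input box (Eq. 13). z_lo/z_hi hold bounds for layers 1..L-1."""
    A = C @ weights[-1]                          # merge C into W^(L)
    bias = C @ biases[-1]
    for l in range(len(weights) - 2, -1, -1):
        lo, hi = z_lo[l], z_hi[l]
        unstable = (lo < 0) & (hi > 0)
        # upper relaxation s_u * z + t_u and adaptive lower relaxation s_l * z (Eq. 10)
        s_u = np.where(unstable, hi / (hi - lo + 1e-12), (lo >= 0).astype(float))
        t_u = np.where(unstable, -lo * s_u, 0.0)
        s_l = np.where(unstable, (hi > -lo).astype(float), (lo >= 0).astype(float))
        pos, neg = np.clip(A, 0.0, None), np.clip(A, None, 0.0)
        bias = bias + neg @ t_u                  # collect relaxation bias terms
        A = pos * s_l + neg * s_u                # Eq. (12): pick a bound per sign
        bias = bias + A @ biases[l]
        A = A @ weights[l]
    mid, rad = (x_lo + x_hi) / 2.0, (x_hi - x_lo) / 2.0
    return A @ mid - np.abs(A) @ rad + bias      # Eq. (13): lower bound on C f(x)
```

Paired with an IBP forward pass (such as the `ibp_forward` sketch above) to supply the intermediate bounds, this yields m_CROWN-IBP(x, ε) with only one backward pass for the small specification layer.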
The time complexity of the backward bound propagation in CROWN-IBP is O((L− 1)nLn2), only nL times slower than forward propagation and significantly more scalable than ordinary CROWN (which is Ln times slower than forward propagation, where typically n nL). The third convolution issue is also not a concern, since we start from the last specification layer W(L) which is a small fully connected layer. Suppose we need to compute W(L)D(L−1)W(L−1) and W(L−1) is a convolutional layer with kernel K(L−1), we can efficiently compute (W(L−1)>(D(L−1)W(L)>))> on GPUs using the transposed convolution operator with kernel K(L−1), without unfolding any convoluational layers. Conceptually, the backward pass of CROWN-IBP propagates a small specification matrix W(L) backwards, replacing affine layers with their transposed operators, and activation function layers with a diagonal matrix product. This allows efficient implementation and better scalability.\nBenefits of CROWN-IBP. Tightness, efficiency and flexibility are unique benefits of CROWN-IBP:\n• CROWN-IBP is based on CROWN, a tight linear relaxation based lower bound which can greatly improve the quality of bounds obtained by IBP to guide verifiable training and improve stabability; • CROWN-IBP avoids the high computational cost of convex relaxation based methods : the time complexity is reduced from O(Ln2(Ln + nL)) to O(Ln2nL), well suited to problems where the output size nL is much smaller than input and intermediate layers’ sizes; also, there is no quadratic dependency on L. Thus, CROWN-IBP is efficient on relatively large networks; • The objective (9) is strictly more general than IBP and allows the flexibility to exploit the strength from both IBP (good for large ) and convex relaxation based methods (good for small ). We can slowly decrease β to 0 during training to avoid the over-regularization problem, yet keeping the initial training of IBP more stable by providing a much tighter bound; we can also keep β = 1 which helps to outperform convex relaxation based methods in small regime (e.g., = 2/255 on CIFAR-10)." }, { "heading": "4 EXPERIMENTS", "text": "Models and training schedules. We evaluate CROWN-IBP on three models that are similar to the models used in (Gowal et al., 2018) on MNIST and CIFAR-10 datasets with different `∞ perturbation norms. Here we denote the small, medium and large models in Gowal et al. (2018) as DM-small, DM-medium and DM-large. During training, we first warm up (regular training without robust loss)\nfor a fixed number of epochs and then increase from 0 to train using a ramp-up schedule ofR epochs. Similar techniques are also used in many other works (Wong et al., 2018; Wang et al., 2018a; Gowal et al., 2018). For both IBP and CROWN-IBP, a natural cross-entropy (CE) loss with weight κ (as in Eq (9)) may be added, and κ is scheduled to linearly decrease from κstart to κend within R ramp-up epochs. Gowal et al. (2018) used κstart = 1 and κend = 0.5. To understand the trade-off between verified accuracy and standard (clean) accuracy, we explore two more settings: κstart = κend = 0 (without natural CE loss) and κstart = 1, κend = 0. For β, a linear schedule during the ramp-up period is used, but we always set βstart = 1 and βend = 0, except that we set βstart = βend = 1 for CIFAR-10 at = 2255 . Detailed model structures and hyperparameters are in Appendix C. Our training code for IBP and CROWN-IBP, and pre-trained models are publicly available 3.\nMetrics. 
Verified error is the percentage of test examples where at least one element in the lower bounds m(xk, ) is < 0. It is an guaranteed upper bound of test error under any `∞ perturbations. We obtain m(xk, ) using IBP or CROWN-IBP (Eq. 13). We also report standard (clean) errors and errors under 200-step PGD attack. PGD errors are lower bounds of test errors under `∞ perturbations.\nComparison to IBP. Table 2 represents the standard, verified and PGD errors under different for each dataset with different κ settings. We test CROWN-IBP on the same model structures in Table 1 of Gowal et al. (2018). These three models’ architectures are presented in Table A in the Appendix. Here we only report the DM-large model structure in as it performs best under all setttings; small and medium models are deferred to Table C in the Appendix. When both κstart = κend = 0, no natural CE loss is added and the model focuses on minimizing verified error, but the lack of natural CE loss may lead to unstable training, especially for IBP; the κstart = 1, κend = 0.5 setting emphasizes on minimizing standard error, usually at the cost of slightly higher verified error rates. κstart = 1, κend = 0 typically achieves the best balance. We can observe that under the same κ settings, CROWN-IBP outperforms IBP in both standard error and verified error. The benefits of CROWN-IBP is significant especially when model is large and is large. We highlight that CROWN-IBP reduces the verified error rate obtained by IBP from 8.21% to 7.02% on MNIST at = 0.3 and from 55.88% to 46.03% on CIFAR-10 at = 2/255 (it is the first time that an IBP based method outperforms results from (Wong et al., 2018), and our model also has better standard error). We also note that we are the first to obtain verifiable bound on CIFAR-10 at = 16/255.\nTrade-off Between Standard Accuracy and Verified Accuracy. To show the trade-off between standard and verified accuracy, we evaluate DM-large CIFAR-10 model with test = 8/255 under different κ settings, while keeping all other hyperparameters unchanged. For each κend = {0.5, 0.25, 0}, we uniformly choose 11 κstart ∈ [1, κend] while keeping all other hyper-parameters unchanged. A larger κstart or κend tends to produce better standard errors, and we can explicitly control the trade-off between standard accuracy and verified accuracy. In Figure 2 we plot the standard and verified errors of IBP and CROWN-IBP trained models with different κ settings. Each cluster on the figure has 11 points, representing 11 different κstart values. Models with lower verified errors tend to have higher standard errors. However, CROWN-IBP clearly outperforms IBP with improvement on both standard and verified accuracy, and\npushes the Pareto front towards the lower left corner, indicating overall better performance. To reach the same verified error of 70%, CROWN-IBP can reduce standard error from roughly 55% to 45%.\nTraining Stability. To discourage hand-tuning on a small set of models and demonstrate the stability of CROWN-IBP over a broader range of models, we evaluate IBP and CROWN-IBP on a variety of small and medium sized model architectures (18 for MNIST and 17 for CIFAR-10), detailed in Appendix D. 
To evaluate training stability, we compare verified errors under different ramp-up schedule lengths (R = {30, 60, 90, 120} on CIFAR-10 and R = {10, 15, 30, 60} on MNIST) and different κ settings. Instead of reporting just the best model, we compare the best, worst and median verified errors over all models. Our results are presented in Figure 3: (a) is for MNIST with ε = 0.3; (c), (d) are for CIFAR with ε = 8/255. We can observe that CROWN-IBP achieves better performance consistently under different schedule lengths. In addition, IBP with κ = 0 cannot stably converge on all models when the schedule is short; under other κ settings, CROWN-IBP always performs better. We conduct additional training stability experiments on the MNIST and CIFAR-10 datasets under other model and ε settings, and the observations are similar (see Appendix H).\n\n³TensorFlow implementation and pre-trained models: https://github.com/deepmind/interval-bound-propagation/ PyTorch implementation and pre-trained models: https://github.com/huanzhang12/CROWN-IBP" }, { "heading": "5 CONCLUSIONS", "text": "We propose a new certified defense method, CROWN-IBP, by combining the fast interval bound propagation (IBP) bound and a tight linear relaxation based bound, CROWN. Our method enjoys the high computational efficiency provided by IBP while facilitating the tight CROWN bound to stabilize training under the robust optimization framework, and provides the flexibility to trade off between the two. Our experiments show that CROWN-IBP consistently outperforms other IBP baselines in both standard errors and verified errors and achieves state-of-the-art verified test errors for ℓ∞ robustness." }, { "heading": "B TIGHTNESS COMPARISON BETWEEN IBP AND CROWN-IBP", "text": "Both IBP and CROWN-IBP produce lower bounds m(x, ε), and a larger lower bound has better quality. To measure the relative tightness of the two bounds, we take the average of all bounds of training examples:\n\n\mathbb{E}_{(x,y) \in X} \, \frac{1}{n_L} \mathbf{1}^\top \big( m_{\text{CROWN-IBP}}(x, \epsilon) - m_{\text{IBP}}(x, \epsilon) \big)\n\nA positive value indicates that CROWN-IBP is tighter than IBP. In Figure B we plot this averaged bound difference during the ε schedule for one MNIST model and one CIFAR-10 model. We can observe that during the early phase of training, when the ε schedule just starts, CROWN-IBP produces significantly better bounds than IBP. A tighter lower bound m(x, ε) gives a tighter upper bound for max_{δ ∈ S} L(x + δ; y; θ), making the minimax optimization problem (2) more effective to solve. As the training schedule proceeds, the model gradually learns how to make IBP bounds tighter, and eventually the difference between the two bounds becomes close to 0.\n\nFigure B: Bound differences between IBP and CROWN-IBP for DM-large models during training, plotted against training epochs for MNIST and CIFAR. The bound difference is only computed during the ε schedule (epoch 10 to 60 for MNIST, and 320 to 1920 for CIFAR-10), as we don't compute CROWN-IBP bounds in the warmup period or after the ε schedule.\n\nWhy does CROWN-IBP stabilize IBP training? When taking a randomly initialized network or a naturally trained network, IBP bounds are very loose. But in Table 1, we show that a network trained using IBP can eventually obtain quite tight IBP bounds and high verified accuracy; the network can adapt to IBP bounds and learn a specific set of weights to make IBP tight and also correctly classify examples. 
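(As a side note, the tightness metric above is cheap to compute from the two margin tensors; a minimal sketch, with shapes assumed to be (batch, n_L):)

```python
import numpy as np

def avg_bound_gap(m_crown_ibp: np.ndarray, m_ibp: np.ndarray):
    # Mean over examples of (1/n_L) 1^T (m_CROWN-IBP - m_IBP); positive values
    # mean the CROWN-IBP margins are tighter (larger) than the IBP margins.
    per_example = (m_crown_ibp - m_ibp).mean(axis=1)
    return float(per_example.mean()), float((per_example > 0).mean())
```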
Why does CROWN-IBP stabilize IBP training? For a randomly initialized network or a naturally trained network, IBP bounds are very loose. But in Table 1, we show that a network trained using IBP can eventually obtain quite tight IBP bounds and high verified accuracy; the network can adapt to IBP bounds and learn a specific set of weights that make IBP tight while also correctly classifying examples. However, since the training has to start from weights that produce loose bounds for IBP, the beginning phase of IBP training can be challenging and is vitally important.\nWe observe that IBP training can have a large performance variance across models and initializations. IBP is also more sensitive to hyperparameters like κ or the schedule length; in Figure 3, many IBP models converge sub-optimally (large worst/median verified error). The reason for the instability is that during the beginning phase of training, the loose bounds produced by IBP make the robust loss (9) ineffective, and it is challenging for the optimizer to reduce this loss and find a set of good weights that produce tight IBP verified bounds in the end.\nConversely, if our bounds are much tighter at the beginning, the robust loss (9) always remains in a reasonable range during training, and the network can gradually learn to find a good set of weights that make IBP bounds increasingly tighter (this is obvious in Figure B). Initially, tighter bounds can be provided by a convex relaxation based method like CROWN, and they are gradually replaced by IBP bounds (using βstart = 1, βend = 0), eventually leading to a model with learned tight IBP bounds." }, { "heading": "C MODELS AND HYPERPARAMETERS FOR COMPARISON TO IBP", "text": "The goal of these experiments is to reproduce the performance reported in (Gowal et al., 2018) and demonstrate the advantage of CROWN-IBP under the same experimental settings. Specifically, to reproduce the IBP results, for CIFAR-10 we train using a large batch size and a long training schedule on TPUs (we can also replicate these results on multiple GPUs using a reasonable amount of training time; see Section F). Also, for this set of experiments we use the same code base as in Gowal et al. (2018). For model performance on a comprehensive set of small and medium sized models trained on a single GPU, please see Table D in Section F, as well as the training stability experiments in Section 4 and Section H.\nThe model structures (DM-small, DM-medium and DM-large) used in Table C and Table 2 are listed in Table A. These three model structures are the same as in Gowal et al. (2018). Training hyperparameters are detailed below:\n• For MNIST IBP baseline results, we follow exactly the same set of hyperparameters as in (Gowal et al., 2018). We train 100 epochs (60K steps) with a batch size of 100, and use warm-up and ramp-up durations of 2K and 10K steps. The learning rate for the Adam optimizer is set to 1× 10−3 and decayed by 10X at steps 15K and 25K. Our IBP results match their reported numbers. Note that we always use IBP verified errors rather than MIP verified errors. We use the same schedule for CROWN-IBP with εtrain = 0.2 (εtest = 0.1) in Table C and Table 2. For εtrain = 0.4, this schedule can obtain verified error rates of 4.22%, 7.01% and 12.84% at εtest = {0.2, 0.3, 0.4} using the DM-Large model, respectively.\n• For MNIST CROWN-IBP with εtrain = 0.4 in Table C and Table 2, we train 200 epochs with a batch size of 256. We use the Adam optimizer and set the learning rate to 5× 10−4. We warm up with 10 epochs' regular training, and gradually ramp ε up from 0 to εtrain in 50 epochs. We reduce the learning rate by 10X at epochs 130 and 190. Using this schedule, IBP's performance becomes worse (by about 1-2% in all settings), but this schedule improves the verified error for CROWN-IBP at εtest = 0.4 from 12.84% to 12.06% and does not affect verified errors at other εtest levels.\n• For CIFAR-10, we follow the setting in Gowal et al.
(2018) and train 3200 epochs on 32 TPU cores. We use a batch size of 1024, and a learning rate of 5 × 10−4. We warm up for 320 epochs, and ramp ε up for 1600 epochs. The learning rate is reduced by 10X at epochs 2600 and 3040. We use random horizontal flips and random crops as data augmentation, and normalize images according to per-channel statistics. Note that this schedule is slightly different from the schedule used in (Gowal et al., 2018); we use a smaller batch size due to TPU memory constraints (we used TPUv2, which has half the memory capacity of the TPUv3 used in (Gowal et al., 2018)), and we also decay learning rates later. We found that this schedule improves both IBP baseline performance and CROWN-IBP performance by around 1%; for example, at ε = 8/255, this improved schedule can reduce the verified error from 73.52% to 72.68% for the IBP baseline (κstart = 1.0, κend = 0.5) using the DM-Large model.\nHyperparameters κ and β. We use a linear schedule for both hyperparameters, decreasing κ from κstart to κend while increasing β from βstart to βend. The schedule length is set to the same length as the ε schedule.\nIn both IBP and CROWN-IBP, the hyperparameter κ is used to trade off between clean accuracy and verified accuracy. Figure 2 shows that κend can significantly affect the trade-off, while κstart has minor impacts compared to κend. In general, we recommend κstart = 1 and κend = 0 as a safe starting point, and we can adjust κend to a larger value if better standard accuracy is desired. The setting κstart = κend = 0 (pure minimax optimization) can be challenging for IBP as there is no natural loss as a stabilizer; under this setting CROWN-IBP usually produces a model with good (sometimes best) verified accuracy but noticeably worse standard accuracy (on CIFAR-10 at ε = 8/255 the difference can be as large as 10%), so this setting is only recommended when a model with the best verified accuracy is desired at the cost of noticeably reduced standard accuracy.\nCompared to IBP, CROWN-IBP adds one additional hyperparameter, β. β has a clear meaning: balancing between the convex relaxation based bounds and the IBP bounds. βstart is always set to 1, as we want to use CROWN-IBP to obtain tighter bounds to stabilize the early phase of training when IBP bounds are very loose; βend determines whether we want to use a convex relaxation based bound (βend = 1) or an IBP based bound (βend = 0) after the ε schedule. Thus, we set βend = 1 for the case where a convex relaxation based method (Wong et al., 2018) can outperform IBP (e.g., CIFAR-10 ε = 2/255), and βend = 0 for the case where IBP outperforms convex relaxation based bounds. We do not tune or grid-search this hyperparameter.\nDM-Small: CONV 16 4×4+2, CONV 32 4×4+1, FC 100.\nDM-Medium: CONV 32 3×3+1, CONV 32 4×4+2, CONV 64 3×3+1, CONV 64 4×4+2, FC 512, FC 512.\nDM-Large: CONV 64 3×3+1, CONV 64 3×3+1, CONV 128 3×3+2, CONV 128 3×3+1, CONV 128 3×3+1, FC 512.\nTable A: Model structures from Gowal et al. (2018). "CONV k w×h+s" represents a 2D convolutional layer with k filters of size w×h using a stride of s in both dimensions. "FC n" = fully connected layer with n outputs. The last fully connected layer is omitted. All networks use ReLU activation functions.
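As a concrete reading of the DM-Large column of Table A, here is a PyTorch sketch; the input size assumes CIFAR-10, the 10-way output layer omitted in Table A is added back, and padding of 1 is our assumption since Table A does not specify padding.

```python
import torch
import torch.nn as nn

# DM-Large per Table A: five conv layers, then FC 512 and the final FC 10.
dm_large = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=1, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, stride=1, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, 3, stride=1, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, 3, stride=1, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(128 * 16 * 16, 512), nn.ReLU(),  # 32x32 input halved once by the stride-2 conv
    nn.Linear(512, 10),  # last FC layer, omitted in Table A
)

print(dm_large(torch.zeros(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```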
" }, { "heading": "D HYPERPARAMETERS AND MODEL STRUCTURES FOR TRAINING STABILITY EXPERIMENTS", "text": "In all our training stability experiments, we use a large number of relatively small models and train them on a single GPU. These small models cannot achieve state-of-the-art performance but they can be trained quickly and cheaply, allowing us to explore training stability over a variety of settings and to report min, median and max statistics. We use the following hyperparameters:\n• For MNIST, we train 100 epochs with batch size 256. We use the Adam optimizer and the learning rate is 5× 10−4. The first epoch is standard training for warming up. We gradually increase ε linearly per batch in our training process with a schedule length of 60. We reduce the learning rate by 50% every 10 epochs after the ε schedule ends. No data augmentation technique is used and the whole 28 × 28 images are used (normalized to the 0-1 range).\n• For CIFAR, we train 200 epochs with batch size 128. We use the Adam optimizer and the learning rate is 1 × 10−3. The first 10 epochs are standard training for warming up. We gradually increase ε linearly per batch in our training process with a schedule length of 120. We reduce the learning rate by 50% every 10 epochs after the ε schedule ends. We use random horizontal flips and random crops as data augmentation. The three channels are normalized with mean (0.4914, 0.4822, 0.4465) and standard deviation (0.2023, 0.1914, 0.2010). These numbers are per-channel statistics from the training set used in (Gowal et al., 2018).\nAll verified error numbers are evaluated on the test set using IBP, since the networks are trained using IBP (β = 0 after ε reaches the target εtrain), except for CIFAR ε = 2/255, where we set β = 1 to compute the CROWN-IBP verified error.\nTable B gives the 18 model structures used in our training stability experiments. These model structures are designed by us and are not used in Gowal et al. (2018). Most CIFAR-10 models share the same structures as MNIST models (unless noted in the table) except that their input dimensions are different. Model A is too small for CIFAR-10, thus we remove it for the CIFAR-10 experiments. Models A-J are the "small models" reported in Figure 3. Models K-T are the "medium models" reported in Figure 3. For the results in Table 1, we use a small model (model structure B) for all three datasets. These MNIST and CIFAR-10 models can each be trained on a single NVIDIA RTX 2080 Ti GPU within a few hours.
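The linear per-batch ε ramp-up and the linear κ/β schedules described in Sections C and D can all be produced by one helper; this is a sketch under our own naming, not the authors' code, and the step accounting below is illustrative.

```python
def linear_schedule(step, warmup_steps, ramp_steps, start, end):
    # Hold `start` during warmup, move linearly to `end` over `ramp_steps`
    # batches, then hold `end`. Works for eps (0 -> eps_train), kappa
    # (kappa_start -> kappa_end) and beta (beta_start -> beta_end).
    if step <= warmup_steps:
        return start
    t = min(1.0, (step - warmup_steps) / float(ramp_steps))
    return start + t * (end - start)

# Example: MNIST small-model setting (1 warmup epoch, 60-epoch eps ramp,
# batch size 256, schedules stepped once per batch).
steps_per_epoch = 60000 // 256
cur = 20 * steps_per_epoch
eps = linear_schedule(cur, steps_per_epoch, 60 * steps_per_epoch, 0.0, 0.3)
kappa = linear_schedule(cur, steps_per_epoch, 60 * steps_per_epoch, 1.0, 0.0)
print(eps, kappa)
```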
" }, { "heading": "E OMITTED RESULTS ON DM-SMALL AND DM-MEDIUM MODELS", "text": "In Table 2 we report results from the best DM-Large model. Table C presents the verified, standard (clean) and PGD attack errors for all three model structures used in (Gowal et al., 2018) (DM-Small, DM-Medium and DM-Large) trained on the MNIST and CIFAR-10 datasets. We evaluate IBP and CROWN-IBP under the same three κ settings as in Table 2. We use the hyperparameters detailed in Section C to train these models. We can see that given any model structure and any κ setting, CROWN-IBP consistently outperforms IBP.\nName Model Structure (all models have a last FC 10 layer, which is omitted)\nA (MNIST Only) Conv 4 4×4+2, Conv 8 4×4+2, FC 128\nB Conv 8 4×4+2, Conv 16 4×4+2, FC 256\nC Conv 4 3×3+1, Conv 8 3×3+1, Conv 8 4×4+4, FC 64\nD Conv 8 3×3+1, Conv 16 3×3+1, Conv 16 4×4+4, FC 128\nE Conv 4 5×5+1, Conv 8 5×5+1, Conv 8 5×5+4, FC 64\nF Conv 8 5×5+1, Conv 16 5×5+1, Conv 16 5×5+4, FC 128\nG Conv 4 3×3+1, Conv 4 4×4+2, Conv 8 3×3+1, Conv 8 4×4+2, FC 256, FC 256\nH Conv 8 3×3+1, Conv 8 4×4+2, Conv 16 3×3+1, Conv 16 4×4+2, FC 256, FC 256\nI Conv 4 3×3+1, Conv 4 4×4+2, Conv 8 3×3+1, Conv 8 4×4+2, FC 512, FC 512\nJ Conv 8 3×3+1, Conv 8 4×4+2, Conv 16 3×3+1, Conv 16 4×4+2, FC 512, FC 512\nK Conv 16 3×3+1, Conv 16 4×4+2, Conv 32 3×3+1, Conv 32 4×4+2, FC 256, FC 256\nL Conv 16 3×3+1, Conv 16 4×4+2, Conv 32 3×3+1, Conv 32 4×4+2, FC 512, FC 512\nM Conv 32 3×3+1, Conv 32 4×4+2, Conv 64 3×3+1, Conv 64 4×4+2, FC 512, FC 512\nN Conv 64 3×3+1, Conv 64 4×4+2, Conv 128 3×3+1, Conv 128 4×4+2, FC 512, FC 512\nO (MNIST Only) Conv 64 5×5+1, Conv 128 5×5+1, Conv 128 4×4+4, FC 512\nP (MNIST Only) Conv 32 5×5+1, Conv 64 5×5+1, Conv 64 4×4+4, FC 512\nQ Conv 16 5×5+1, Conv 32 5×5+1, Conv 32 5×5+4, FC 512\nR Conv 32 3×3+1, Conv 64 3×3+1, Conv 64 3×3+4, FC 512\nS (CIFAR-10 Only) Conv 32 4×4+2, Conv 64 4×4+2, FC 128\nT (CIFAR-10 Only) Conv 64 4×4+2, Conv 128 4×4+2, FC 256\nTable B: Model structures used in our training stability experiments. We use ReLU activations for all models. We omit the last fully connected layer as its output dimension is always 10. In the table, "Conv k w×w+s" represents a 2D convolutional layer with k filters of size w×w and a stride of s. Models A-J are referred to as "small models" and models K-T are referred to as "medium models"." }, { "heading": "F ADDITIONAL EXPERIMENTS ON SMALLER MODELS USING A SINGLE GPU", "text": "In this section we present additional experiments on a variety of smaller MNIST and CIFAR-10 models which can be trained on a single GPU. The purpose of this experiment is to compare model performance statistics (min, median and max) on a wide range of models, rather than a few hand-selected models. The model structures used in these experiments are detailed in Table B. In Table D, we present the best, median and worst verified and standard (clean) test errors for models trained on MNIST and CIFAR-10 using IBP and CROWN-IBP. Although these small models cannot achieve state-of-the-art performance, CROWN-IBP's best, median and worst verified errors among all model structures consistently outperform those of IBP. In particular, in many situations the worst case verified error improves significantly with CROWN-IBP, because IBP training is not stable on some of the models.\nIt is worth noting that in this set of experiments we explore a different setting: εtrain = εtest. We found that both IBP and CROWN-IBP tend to overfit to the training dataset on MNIST with small ε, thus verified errors are not as good as those presented in Table C. This overfitting issue can be alleviated by using εtrain > εtest (as used in Table 2 and Table C), or by using an explicit `1 regularization, which will be discussed in detail in Section I.\nTable C: The verified, standard (clean) and PGD attack errors for 3 models (DM-small, DM-medium, DM-large) trained on MNIST and CIFAR test sets. We evaluate IBP and CROWN-IBP under different κ schedules.
CROWN-IBP outperforms IBP under the same κ setting, and also achieves state-of-the-art results for `∞ robustness on both MNIST and CIFAR datasets for all .\nDataset (`∞ norm) Training Method κ schedules DM-small model’s err. (%) DM-medium model’s err. (%) DM-large model’s err. (%) κstart κend Standard Verified PGD Standard Verified PGD Standard Verified PGD\nMNIST\nIBP 0 0 1.92 4.16 3.88 1.53 3.26 2.82 1.13 2.89 2.24 1 0.5 1.68 3.60 3.34 1.46 3.20 2.57 1.08 2.75 2.02\ntest = 0.1 1 0 2.14 4.24 3.94 1.48 3.21 2.77 1.14 2.81 2.11 train = 0.2\nCROWN-IBP 0 0 1.90 3.50 3.21 1.44 2.77 2.37 1.17 2.36 1.91 1 0.5 1.60 3.51 3.19 1.14 2.64 2.23 0.95 2.38 1.77 1 0 1.67 3.44 3.09 1.34 2.76 2.39 1.17 2.24 1.81\nIBP 0 0 5.08 9.80 9.36 3.68 7.38 6.77 3.45 6.46 6.00 1 0.5 3.83 8.64 8.06 2.55 5.84 5.33 2.12 4.75 4.24\ntest = 0.2 1 0 6.25 11.32 10.84 3.89 7.21 6.68 2.74 5.46 4.89 train = 0.4\nCROWN-IBP 0 0 3.78 6.61 6.40 3.84 6.65 6.42 2.84 5.15 4.90 1 0.5 2.96 6.11 5.74 2.37 5.35 4.90 1.82 4.13 3.81 1 0 3.55 6.29 6.13 3.16 5.82 5.44 2.17 4.31 3.99\nIBP 0 0 5.08 14.42 13.30 3.68 10.97 9.66 3.45 9.76 8.42 1 0.50 3.83 13.99 12.25 2.55 9.51 7.87 2.12 8.47 6.78\ntest = 0.3 1 0 6.25 16.51 15.07 3.89 10.4 9.17 2.74 8.73 7.37 train = 0.4\nCROWN-IBP 0 0 3.78 9.60 8.90 3.84 9.25 8.57 2.84 7.65 6.90 1 0.5 2.96 9.44 8.26 2.37 8.54 7.74 1.82 7.02 6.05 1 0 3.55 9.40 8.50 3.16 8.62 7.65 2.17 7.03 6.12\nIBP 0 0 5.08 23.40 20.15 3.68 18.34 14.75 3.45 16.19 12.73 1 0.5 3.83 24.16 19.97 2.55 16.82 12.83 2.12 15.37 11.05\ntest = 0.4 1 0 6.25 26.81 22.78 3.89 16.99 13.81 2.74 14.80 11.14 train = 0.4\nCROWN-IBP 0 0 3.78 15.21 13.34 3.84 14.58 12.69 2.84 12.74 10.39 1 0.5 2.96 16.04 12.91 2.37 14.97 12.47 1.82 12.59 9.58 1 0 3.55 15.55 13.11 3.16 14.19 11.31 2.17 12.06 9.47\nCIFAR-10\ntest = 2 255 4 train = 2.2 255 3\nIBP 0 0 44.66 56.38 54.15 39.12 53.86 49.77 38.54 55.21 49.72 1 0.5 38.90 57.94 53.64 34.19 56.24 49.63 33.77 58.48 50.54 1 0 44.08 56.32 54.16 39.30 53.68 49.74 39.22 55.19 50.40 CROWN-IBP 0 0 39.43 53.93 49.16 32.78 49.57 44.22 28.48 46.03 40.28 1 0.5 34.08 54.28 51.17 28.63 51.39 42.43 26.19 50.53 40.24 1 0 38.15 52.57 50.35 33.17 49.82 44.64 28.91 46.43 40.27\ntest = 8\n255 train = 8.8 255 3\nIBP 0 0 61.91 73.12 71.75 61.46 71.98 70.07 59.41 71.22 68.96 1 0.5 54.01 73.04 70.54 50.33 73.58 69.57 49.01 72.68 68.14 1 0 62.66 72.25 70.98 61.61 72.60 70.57 58.43 70.81 68.73 CROWN-IBP 0 0 59.94 70.76 69.65 59.17 69.00 67.60 54.02 66.94 65.42 1 0.5 53.12 73.51 70.61 48.51 71.55 67.67 45.47 69.55 65.74 1 0 60.84 72.47 71.18 58.19 68.94 67.72 55.27 67.76 65.71\ntest = 16 255 train = 17.6 255 3\nIBP 0 0 70.02 78.86 77.67 67.55 78.65 76.92 68.97 78.12 76.66 1 0.5 63.43 81.58 78.81 60.07 81.01 77.32 59.46 80.85 76.97 1 0 67.73 78.71 77.52 70.28 79.26 77.43 68.88 78.91 76.95 CROWN-IBP 0 0 67.42 78.41 76.86 68.06 77.92 76.89 67.17 77.27 75.76 1 0.5 61.47 79.62 77.13 59.56 79.30 76.43 56.73 78.20 74.87 1 0 68.75 78.71 77.91 67.94 78.46 77.21 66.06 76.80 75.23\n1 Verified errors reported in Table 4 of Gowal et al. (2018) are evaluated using mixed integer programming (MIP). For a fair comparison, we use the IBP verified errors reported in Table 3 of Gowal et al. (2018). 2 According to direct communication with the authors of Gowal et al. (2018), achieving 68.44% IBP verified error requires to adding an extra PGD adversarial training loss. Without adding PGD, the achievable verified error is 72.91% (LP/MIP verified) or 73.52% (IBP verified). 
3 Although not explicitly mentioned, the best CIFAR-10 models in (Gowal et al., 2018) also use train = 1.1 test. 4 We use βstart = βend = 1 for this setting, the same as in Table 2, and thus CROWN-IBP bound is used to evaluate the verified error.\nTable D: Verified and standard (clean) test errors for a large number of models trained on MNIST and CIFAR-10 datasets using IBP and CROWN-IBP. The purpose of this experiment is to compare model performance statistics (min, median and max) on a wide range of models, rather than a few hand selected models. For each setting we report 3 representative models: the models with smallest, median, and largest verified error. We also report the standard error of these three selected models. Note that in this table we set train = test and observe overfitting on small for MNIST. See Section I for detailed discussions.\nDataset (`∞ norm) Model Family Training Method κ schedule Verified Test Error (%) Standard Test Error(%) κstart κend best median worst best median worst\nMNIST\n10 small models\nIBP 0 0 4.79 5.74 7.32 1.48 1.59 2.50 1 0 4.87 5.72 7.24 1.51 1.34 2.46 1 0.5 5.24 5.95 7.36 1.41 1.88 1.87 CROWN-IBP 0 0 4.21 5.18 6.80 1.41 1.83 2.58 1 0 4.14 5.24 6.82 1.39 2.06 2.46\ntrain = 0.1 1 0.5 4.62 5.94 6.88 1.26 1.88 1.97 test = 0.1\n8 medium models\nIBP 0 0 5.9 6.25 7.82 1.14 1.12 1.23 1 0 5.77 6.30 7.50 1.21 1.13 1.34 1 0.5 6.05 6.40 7.70 1.19 1.33 1.24 CROWN-IBP 0 0 5.22 5.63 6.34 1.19 1.05 1.03 1 0 5.43 5.90 6.02 1.30 1.03 1.09 1 0.5 5.44 5.89 6.09 1.11 1.16 1.01\n10 small models\nIBP 0 0 6.90 8.24 12.67 1.93 2.76 4.14 1 0 6.84 8.16 12.92 2.01 2.56 3.93 1 0.5 7.31 8.71 13.54 1.62 2.36 3.22 CROWN-IBP 0 0 6.11 7.29 11.97 1.93 2.3 3.86 1 0 6.27 7.66 12.11 2.01 2.92 4.06\ntrain = 0.2 1 0.5 6.53 8.14 12.56 1.61 1.61 3.27 test = 0.2\n8 medium models\nIBP 0 0 7.56 8.60 9.80 1.96 2.19 1.39 1 0 8.26 8.72 9.84 1.45 1.73 1.31 1 0.5 8.42 8.90 10.09 1.76 1.42 1.53 CROWN-IBP 0 0 6.06 6.42 7.64 1.09 1.33 1.36 1 0 6.39 7.09 7.84 1.11 1.04 1.25 1 0.5 6.63 7.51 7.96 1.08 1.25 1.19\n10 small models\nIBP 0 0 10.54 12.02 20.47 2.78 3.31 6.07 1 0 9.96 12.09 21.0 2.7 3.48 6.68 1 0.5 10.37 12.78 21.99 2.11 3.44 5.19 CROWN-IBP 0 0 8.87 11.29 16.83 2.43 3.62 7.26 1 0 9.69 11.33 15.23 2.78 3.41 5.90\ntrain = 0.3 1 0.5 9.90 11.98 19.56 2.20 2.72 4.83 test = 0.3\n8 medium models\nIBP 0 0 10.43 10.83 11.99 2.01 2.38 3.29 1 0 10.74 11.73 12.16 2.17 2.46 1.60 1 0.5 11.23 11.71 12.4 1.72 2.09 1.63 CROWN-IBP 0 0 7.46 8.47 8.57 1.48 1.52 1.99 1 0 7.96 8.53 8.99 1.45 1.56 1.85 1 0.5 8.19 9.20 9.51 1.27 1.46 1.62\n10 small models\nIBP 0 0 16.72 18.89 37.42 4.2 5.4 9.63 1 0 16.10 18.75 35.3 3.8 4.93 11.32 1 0.5 16.54 19.14 35.42 3.40 3.65 7.54 CROWN-IBP 0 0 15.38 18.57 24.56 3.61 4.83 8.46 1 0 16.22 18.20 24.80 4.23 5.15 8.54\ntrain = 0.4 1 0.5 15.97 19.18 24.76 3.48 3.97 6.64 test = 0.4\n8 medium models\nIBP 0 0 15.17 16.54 18.98 2.83 3.79 4.91 1 0 15.63 16.06 17.11 2.93 3.4 3.75 1 0.5 15.74 16.42 17.98 2.35 2.31 3.15 CROWN-IBP 0 0 12.96 13.43 14.25 2.76 2.85 3.36 1 0 12.90 13.47 14.06 2.42 2.86 3.11 1 0.5 13.02 13.69 14.52 1.89 2.40 2.35\nCIFAR-10\n9 small models\nIBP 0 0 54.69 57.84 60.58 40.59 45.51 51.38 1 0 54.56 58.42 60.69 40.32 47.42 50.73 1 0.5 56.89 60.66 63.58 34.28 39.28 48.03 CROWN-IBP 0 0 48.87 54.68 58.82 32.20 40.08 46.98 1 0 49.45 55.09 59.00 32.22 40.45 47.05\ntrain = 2/255 2 1 0.5 52.14 57.49 60.12 28.03 35.76 43.40 test = 2/255\n8 medium models\nIBP 0 0 55.47 56.41 58.54 41.59 44.33 46.54 1 0 55.51 56.74 57.85 42.41 43.71 44.74 1 0.5 57.05 59.70 60.25 34.77 35.80 
38.95 CROWN-IBP 0 0 49.57 50.83 52.59 32.64 34.20 37.06 1 0 49.28 51.59 53.45 32.31 34.23 38.11 1 0.5 52.05 53.56 55.23 27.80 29.49 32.42\n9 small models\nIBP 0 0 72.07 73.34 73.88 61.11 61.01 64.0 1 0 72.42 72.57 73.49 62.26 60.98 63.5 1 0.5 73.88 75.16 76.93 55.66 52.53 53.79 CROWN-IBP 0 0 71.28 72.15 73.66 59.40 60.80 63.10 1 0 70.77 72.24 73.10 58.65 60.49 61.86\ntrain = 8/255 1 0.5 72.59 74.71 76.11 49.86 52.95 55.58 test = 8/255\n8 medium models\nIBP 0 0 72.75 73.23 73.82 59.23 65.96 66.35 1 0 72.18 72.83 74.38 62.54 59.6 61.99 1 0.5 74.84 75.59 97.93 51.71 54.41 54.12 CROWN-IBP 0 0 70.79 71.61 72.29 57.90 60.10 59.70 1 0 70.51 71.96 72.82 57.87 59.98 59.16 1 0.5 73.48 74.69 76.66 49.40 53.56 52.05\n1 Verified errors reported in Table 4 of Gowal et al. (2018) are evaluated using mixed integer programming (MIP) and linear programming (LP), which are strictly smaller than IBP verified errors but computationally expensive. For a fair comparison, we use the IBP verified errors reported in their Table 3. 3 We use βstart = βend = 1 for this setting, the same as in Table 2, and thus CROWN-IBP bound is used to evaluate the verified error.\n(a) small models, = 2/255, best 48.87% (b) small models, = 8/255, best 70.61%\nFigure C: Verified error vs. schedule length (30, 60, 90, 120) on 9 small models on CIFAR-10. The solid boxes show median values of verified errors. κstart = 1.0 except for the κ = 0 setting. The upper and lower bound of an error bar are worst and best verified error, respectively." }, { "heading": "G REPRODUCIBILITY", "text": "To further test the training stability of CROWN-IBP, we run each MNIST experiment (using selected models in Table B) 5 times to get the mean and standard deviation of the verified and standard errors on test set. Results are presented in Table E. Standard deviations of verified errors are very small, giving us further evidence of good stability and reproduciblity.\nerror model A model B model C model D model E model F model G model H model I model J\n0.1 std. err. (%) 2.57± .04 1.45± .05 3.02± .04 1.77± .04 2.13± .08 1.35± .05 2.03± .08 1.32± .08 1.77± .04 1.45± .05verified err. (%) 6.85± .04 4.88± .04 6.67± .1 5.10± .1 4.82± .2 4.18± .008 5.23± .2 4.59± .08 5.92± .09 5.40± .09 0.2 std. err. (%) 3.87± .04 2.43± .04 4.40± .2 2.32± .04 3.45± .3 1.90± 0 2.67± .1 2.00± .07 2.22± .04 1.65± .05verified err. (%) 12.0± .03 6.99± .04 10.3± .2 7.37± .06 9.01± .9 6.05± .03 7.50± .1 6.45± .06 7.50± .3 6.31± .08 0.3 std. err. (%) 5.97± .08 3.20± 0 6.78± .1 3.70± .1 3.85± .2 3.10± .1 4.20± .3 2.85± .05 3.67± .08 2.35± .09verified err. (%) 15.4± .08 10.6± .06 16.1± .3 11.3± .1 11.7± .2 9.96± .09 12.2± .6 9.90± .2 11.2± .09 9.21± .3 0.4 std. err. (%) 8.43± .04 4.93± .1 8.53± .2 5.83± .2 5.48± .2 4.65± .09 6.80± .2 4.28± .1 5.60± .1 3.60± .07verified err. (%) 24.6± .1 18.5± .2 24.6± .7 19.2± .2 18.8± .2 17.3± .04 20.4± .3 16.3± .2 18.5± .07 15.2± .3\nTable E: Means and standard deviations of verified and standard errors of 10 MNIST models trained using CROWN-IBP. The architectures of these models are presented in Table B. We run each model 5 times to compute its mean and standard deviation." }, { "heading": "H TRAINING STABILITY EXPERIMENTS ON OTHER", "text": "Similar to our experiments in Section 4, we compare the verified errors obtained by CROWN-IBP and IBP under different schedule lengths (10, 15, 30, 60) on MNIST and (30,60,90,120) on CIFAR-10. 
We present the best, worst and median verified errors over all 18 models for MNIST in Figures D and E at ε ∈ {0.1, 0.2, 0.3}, and over 9 small models for CIFAR-10 in Figure C. The upper and lower ends of an error bar are the worst and best verified errors, respectively, and the solid boxes represent median values. CROWN-IBP can improve training stability, and consistently outperforms IBP under different schedule lengths and κ settings." }, { "heading": "I OVERFITTING ISSUE WITH SMALL ε", "text": "We found that on MNIST, for a small ε, the verified errors obtained by IBP based methods are not as good as those of linear relaxation based methods (Wong et al., 2018; Mirman et al., 2018). Gowal et al. (2018) thus propose to train models using a larger ε and evaluate them under a smaller ε, for example εtrain = 0.4 and εeval = 0.3. Instead, we investigated this issue further and found that many CROWN-IBP trained models achieve very small verified errors (close to 0 and sometimes exactly 0) on the training set (see Table F). This indicates possible overfitting during training. As we discussed in Section 3, linear relaxation based methods implicitly regularize the weight matrices so the network does not overfit when ε is small. Inspired by this finding, we want to see if adding an explicit `1 regularization term in CROWN-IBP training helps when εtrain = 0.1 or 0.2. The verified and standard errors on the training and test sets with and without regularization can be found in Table F. We can see that with a small `1 regularization added (λ = 5× 10−5) we can reduce verified errors on the test set significantly. This makes CROWN-IBP results comparable to the numbers reported in convex adversarial polytope (Wong et al., 2018); at ε = 0.1, the best model using convex adversarial polytope training can achieve 3.67% verified error, while CROWN-IBP achieves a 3.60% best certified error on the models presented in Table F. The overfitting is likely caused by IBP's strong learning power without over-regularization, which also explains why IBP based methods significantly outperform linear relaxation based methods at larger ε values. Using early stopping can also improve the verified error on the test set; see Figure D. A sketch of the `1 penalty follows the figure captions below.\n(a) ε = 0.1, best 3.55% (b) ε = 0.2, best 4.98%\nFigure D: Verified error vs. schedule length (10, 15, 30, 60) on 8 medium MNIST models. The upper and lower ends of a vertical bar represent the worst and best verified errors, respectively. The solid boxes represent the median values of the verified error. For a small ε, using a shorter schedule length improves the verified error due to early stopping, which prevents overfitting. All best verified errors are achieved by CROWN-IBP regardless of schedule length.\n(a) ε = 0.1, best 3.84% (b) ε = 0.2, best 6.11% (c) ε = 0.3, best 8.87%\nFigure E: Verified error vs. schedule length (10, 15, 30, 60) on 10 small MNIST models. The upper and lower ends of a vertical bar represent the worst and best verified errors, respectively. All best verified errors are achieved by CROWN-IBP regardless of schedule length.
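The `1 term above is a one-line addition to the training objective. A hedged PyTorch sketch follows (λ = 5 × 10−5 as in Table F; applying the penalty to all trainable parameters is our assumption, since the exact parameter set is not specified in the text):

```python
import torch

def l1_penalty(model, lam=5e-5):
    # lam * sum of absolute values of all trainable parameters.
    return lam * sum(p.abs().sum() for p in model.parameters() if p.requires_grad)

# Usage inside a training step, where robust_loss comes from IBP/CROWN-IBP:
#   loss = robust_loss + l1_penalty(model)
#   loss.backward()
model = torch.nn.Linear(784, 10)
print(l1_penalty(model).item())
```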
" }, { "heading": "J TRAINING TIME", "text": "In Table G we present the training time of CROWN-IBP, IBP and convex adversarial polytope (Wong et al., 2018) on several representative models. All experiments are measured on a single RTX 2080 Ti GPU with 11 GB RAM, except for 2 DM-Large models where we use 4 RTX 2080 Ti GPUs to speed up training. We can observe that CROWN-IBP is practically 1.5 to 3.5 times slower than IBP. Theoretically, CROWN-IBP is up to nL = 10 times slower⁴ than IBP; however, usually the total training time is less than 10 times longer, since the CROWN-IBP bound is only computed during the ramp-up phase, and CROWN-IBP has higher GPU computation intensity and thus better GPU utilization than IBP. Convex adversarial polytope (Wong et al., 2018), as a representative linear relaxation based method, can be over hundreds of times slower than IBP, especially on deeper networks. Note that we use 50 random Cauchy projections for (Wong et al., 2018). Using random projections alone is not sufficient to scale purely linear relaxation based methods to larger datasets, thus we advocate a combination of IBP bounds with linear relaxation based methods, as in CROWN-IBP, which offers good scalability and stability. We also note that the random projection based acceleration can also be applied to the backward bound propagation (CROWN-style bound) in CROWN-IBP to further speed it up.\n⁴ More precisely, nL − 1 = 9 times slower, as we can omit the all-zero row in the specification matrix in Eq. (3).\nε Model (see Appendix D) λ (`1 regularization) Training std. err. Training verified err. Test std. err. Test verified err.\n0.1 P 0 0.01% 0.01% 1.05% 5.63%\n0.1 P 5× 10−5 0.32% 0.98% 1.30% 3.60%\n0.1 O 0 0.02% 0.05% 0.82% 6.02%\n0.1 O 5× 10−5 0.38% 1.34% 1.43% 4.02%\n0.2 P 0 0.35% 1.40% 1.09% 6.06%\n0.2 P 5× 10−5 1.02% 3.73% 1.48% 5.48%\n0.2 O 0 0.31% 1.54% 1.22% 6.64%\n0.2 O 5× 10−5 1.09% 4.08% 1.69% 5.72%\nTable F: `1 regularized and unregularized models' standard and verified errors on the training and test sets. At a small ε, CROWN-IBP may overfit, and adding regularization helps robust generalization; on the other hand, convex relaxation based methods (Wong et al., 2018) provide implicit regularization which helps generalization under small ε but deteriorates model performance at larger ε.\nTable G: IBP and CROWN-IBP's training time on different models in seconds. For IBP and CROWN-IBP, we use a batch size of 256 for MNIST and 128 for CIFAR-10. For convex adversarial polytope, we use 50 random Cauchy projections, and reduce the batch size if necessary to fit into GPU memory.\nData MNIST CIFAR-10\nModel Name A C G L O DM-large (εtrain = 0.4) B D H S M DM-large\nIBP (s) 245 264 290 364 1032 3769¹ 734 908 1048 691 1407 40496¹\nCROWN-IBP (s) 371 564 590 954 3649 5584¹ 1148 1853 1859 1491 4137 91288¹\nCAP (Wong et al., 2018)² (s) 1708 9263 12649 35518 160794 — 2372 12688 18691 6961 51145 —\n¹ We use 4 GPUs to train this model. ² Convex adversarial polytopes (CAP) are computed with 50 random projections. Without random projections it will not scale to most models except for the smallest ones." }, { "heading": "K REPRODUCING CIFAR-10 RESULTS ON MULTI-GPUS", "text": "The use of 32 TPUs for our CIFAR-10 experiments is not necessary. We use TPUs mainly for obtaining a completely fair comparison to IBP (Gowal et al., 2018), as their implementation was TPU-based. Since TPUs are not widely available, we additionally implemented CROWN-IBP on multiple GPUs. We train the best models in Table 2 on 4 RTX 2080 Ti GPUs. As shown in Table H, we can achieve comparable verified errors using GPUs, and the differences between GPU and TPU training are around ±0.5%. Training time is reported in Table G.
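The 50 random Cauchy projections mentioned in Section J exploit the fact that, for a vector r of i.i.d. standard-Cauchy entries, a^T r is Cauchy-distributed with scale ‖a‖1, so a median over a few projections estimates the `1 norms that the (Wong et al., 2018) bound needs without materializing full matrices. A self-contained sketch of just this estimator (our code, not theirs):

```python
import numpy as np

def l1_norms_by_cauchy_projection(A, k=50, seed=0):
    # Estimate the l1 norm of every row of A with k Cauchy projections:
    # A @ r is Cauchy with scale ||row||_1, and the median of |A @ r|
    # over k independent draws estimates that scale.
    rng = np.random.default_rng(seed)
    R = rng.standard_cauchy(size=(A.shape[1], k))
    return np.median(np.abs(A @ R), axis=1)

A = np.random.default_rng(1).normal(size=(8, 1000))
print(np.round(l1_norms_by_cauchy_projection(A), 1))
print(np.round(np.abs(A).sum(axis=1), 1))  # exact values for comparison
```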
}, { "heading": "L EXACT FORMS OF THE CROWN-IBP BACKWARD BOUND", "text": "CROWN (Zhang et al., 2018) is a general framework that replaces non-linear functions in a neural network with linear upper and lower hyperplanes with respect to pre-activation variables, such that the entire neural network function can be bounded by a linear upper hyperplane and linear lower hyperplane for all x ∈ S (S is typically a norm bounded ball, or a box region):\nAx+ b ≤ f(x) ≤ Ax+ b\nCROWN achieves such linear bounds by replacing non-linear functions with linear bounds, and utilizing the fact that the linear combinations of linear bounds are still linear, thus these linear bounds\nTable H: Comparison of verified and standard errors for CROWN-IBP models trained on TPUs and GPUs (CIFAR-10, DM-Large model).\nDataset (`∞ norm) Training Device κ schedules Model errors (%) κstart κend Standard Verified\nCIFAR-10\ntest = 2 255 1 GPU 0 0 29.18 45.50 train = 2.2 255 TPU 0 0 28.48 46.03\ntest = 8 255 GPU 0 0 54.60 67.11 train = 8.8 255 TPU 0 0 54.02 66.94\n1 We use βstart = βend = 1 for this setting, the same as in Table 2, and thus CROWN-IBP\nbound is used to evaluate the verified error.\ncan propagate through layers. Suppose we have a non-linear vector function σ, applying to an input (pre-activation) vector z, CROWN requires the following bounds in a general form:\nAσz + bσ ≤ σ(z) ≤ Aσz + bσ\nIn general the specific bounds Aσ,bσ,Aσ,bσ for different σ needs to be given in a case-by-case basis, depending on the characteristics of σ and the preactivation range z ≤ z ≤ z. In neural network common σ can be ReLU, tanh, sigmoid, maxpool, etc. Convex adversarial polytope (Wong et al., 2018) is also a linear relaxation based techniques that is closely related to CROWN, but only for ReLU layers. For ReLU such bounds are simple, where Aσ,Aσ are diagonal matrices, bσ = 0:\nDz ≤ σ(z) ≤ Dz + c (14)\nwhere D and D are two diagonal matrices:\nDk,k = 1, if zk > 0, i.e., this neuron is always active 0, if zk < 0, i.e., this neuron is always inactive α, otherwise, any 0 ≤ α ≤ 1\n(15)\nDk,k = 1, if zk > 0, i.e., this neuron is always active 0, if zk < 0, i.e., this neuron is always inactive zk\nzk−zk , otherwise\n(16)\nck = 0, if zk > 0, i.e., this neuron is always active 0, if zk < 0, i.e., this neuron is always inactive zkzk zk−zk , otherwise (17)\nNote that CROWN-style bounds require to know all pre-activation bounds z(l) and z(l). We assume these bounds are valid for x ∈ S. In CROWN-IBP, these bounds are obtained by interval bound propagation (IBP). With pre-activation bounds z(l) and z(l) given (for x ∈ S), we rewrite the CROWN lower bound for the special case of ReLU neurons:\nTheorem L.1 (CROWN Lower Bound). 
For an L-layer neural network function $f(x) : \mathbb{R}^{n_0} \rightarrow \mathbb{R}^{n_L}$, $\forall j \in [n_L]$, $\forall x \in S$, we have $\underline{f}_j(x) \leq f_j(x)$, where\n$\underline{f}_j(x) = \underline{\mathbf{A}}^{(0)}_{j,:} x + \sum_{l=1}^{L} \underline{\mathbf{A}}^{(l)}_{j,:} \big( \mathbf{b}^{(l)} + \underline{\mathbf{b}}^{j,(l)} \big), \quad (18)$\n$\underline{\mathbf{A}}^{(l)}_{j,:} = \begin{cases} \mathbf{e}_j^{\top}, & \text{if } l = L \\ \underline{\mathbf{A}}^{(l+1)}_{j,:} \mathbf{W}^{(l+1)} \mathbf{D}^{j,(l)}, & \text{if } l \in \{0, \cdots, L-1\} \end{cases}$\nand $\forall k \in [n_l]$ we define the diagonal matrices $\mathbf{D}^{j,(l)}$ and bias vectors $\underline{\mathbf{b}}^{j,(l)}$ as follows: $\mathbf{D}^{j,(0)} = \mathbf{I}$, $\underline{\mathbf{b}}^{j,(L)} = \mathbf{0}$, and for $l \in \{1, \cdots, L-1\}$:\n$\mathbf{D}^{j,(l)}_{k,k} = \begin{cases} 1, & \text{if } \underline{\mathbf{A}}^{(l+1)}_{j,:} \mathbf{W}^{(l+1)}_{:,k} \geq 0 \text{ and } \overline{z}^{(l)}_k > |\underline{z}^{(l)}_k| \\ 0, & \text{if } \underline{\mathbf{A}}^{(l+1)}_{j,:} \mathbf{W}^{(l+1)}_{:,k} \geq 0 \text{ and } \overline{z}^{(l)}_k < |\underline{z}^{(l)}_k| \\ \frac{\overline{z}^{(l)}_k}{\overline{z}^{(l)}_k - \underline{z}^{(l)}_k}, & \text{if } \underline{\mathbf{A}}^{(l+1)}_{j,:} \mathbf{W}^{(l+1)}_{:,k} < 0 \end{cases}$\n$\underline{\mathbf{b}}^{j,(l)}_k = \begin{cases} 0, & \text{if } \underline{\mathbf{A}}^{(l+1)}_{j,:} \mathbf{W}^{(l+1)}_{:,k} \geq 0 \\ -\frac{\overline{z}^{(l)}_k \underline{z}^{(l)}_k}{\overline{z}^{(l)}_k - \underline{z}^{(l)}_k}, & \text{if } \underline{\mathbf{A}}^{(l+1)}_{j,:} \mathbf{W}^{(l+1)}_{:,k} < 0 \end{cases}$\nHere $\mathbf{e}_j \in \mathbb{R}^{n_L}$ is the standard unit vector with its j-th coordinate set to 1.\nNote that unlike the ordinary CROWN (Zhang et al., 2018), in CROWN-IBP we only need the lower bound to compute m and do not need to compute the A matrices for the upper bound. This saves half of the computation cost relative to ordinary CROWN. Also, W represents any affine layers in a neural network, including convolutional layers in CNNs. In Section 3.2, we discussed how to use transposed convolution operators to efficiently implement CROWN-IBP on GPUs.\nAlthough in this paper we focus on the common case of the ReLU activation function, other general activation functions (sigmoid, max-pooling, etc.) can be used in the network, as CROWN is a general framework for dealing with non-linearity. For a more general derivation we refer the readers to (Zhang et al., 2018) and (Salman et al., 2019b)." } ]
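The ReLU relaxation in Eqs. (14)-(17) translates directly into a few lines of array code. The sketch below (our own naming, not the paper's code) returns the diagonal entries of the lower and upper relaxations and the intercept from elementwise pre-activation bounds, using the adaptive choice α = 1 when the upper bound exceeds the magnitude of the lower bound and α = 0 otherwise (any 0 ≤ α ≤ 1 would be valid):

```python
import numpy as np

def relu_relaxation(z_lo, z_hi):
    # Coefficients of the linear ReLU bounds d_lo*z <= relu(z) <= d_hi*z + c_hi,
    # given elementwise pre-activation bounds z_lo <= z <= z_hi.
    active = z_lo > 0          # neuron always active: identity
    inactive = z_hi < 0        # neuron always inactive: zero
    unstable = ~(active | inactive)
    denom = np.maximum(z_hi - z_lo, 1e-12)

    d_hi = np.where(active, 1.0, np.where(unstable, z_hi / denom, 0.0))  # Eq. (16)
    c_hi = np.where(unstable, -z_hi * z_lo / denom, 0.0)                 # Eq. (17)
    alpha = (z_hi > -z_lo).astype(float)                                 # adaptive alpha
    d_lo = np.where(active, 1.0, np.where(inactive, 0.0, alpha))         # Eq. (15)
    return d_lo, d_hi, c_hi

# One unstable, one always-active, one always-inactive neuron.
print(relu_relaxation(np.array([-1.0, 0.5, -2.0]), np.array([2.0, 3.0, -0.5])))
```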
2019
VERIFIABLY ROBUST NEURAL NETWORKS
SP:687a3382a219565eb3eb85b707017eb582439565
[ "Paper summary: This paper argues that reducing the reliance of neural networks on high-frequency components of images could help robustness against adversarial examples. To attain this goal, the authors propose a new regularization scheme that encourages convolutional kernels to be smoother. The authors augment standard loss functions with the proposed regularization scheme and study the effect on adversarial robustness, as well as perceptual-alignment of model gradients.", "The authors propose a method for learning smoother convolutional kernels with the goal of improving robustness and human alignment. Specifically, they propose a regularizer penalizing large changes between consecutive pixels of the kernel with the intuition of penalizing the use of high-frequency input components. They evaluate the impact of their method on the adversarial robustness of various models and class visualization methods." ]
Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans tend to be more sensitive to lower-frequency (larger-scale) patterns, we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel. We apply our regularization onto several popular training methods, demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness. Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients.
[]
[ { "authors": [ "Naveed Akhtar", "Ajmal Mian" ], "title": "Threat of adversarial attacks on deep learning in computer vision: A survey", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Rima Alaifari", "Giovanni S. Alberti", "Tandri Gauksson" ], "title": "ADef: an iterative algorithm to construct adversarial deformations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Bhuvanesh Awasthi", "Jason Friedman", "Mark A Williams" ], "title": "Faster, stronger, lateralized: low spatial frequency information supports face processing. Neuropsychologia", "venue": null, "year": 2011 }, { "authors": [ "Moshe Bar" ], "title": "Visual objects in context", "venue": "Nature Reviews Neuroscience,", "year": 2004 }, { "authors": [ "Ronald Newbold Bracewell" ], "title": "The Fourier transform and its applications, volume 31999", "venue": "McGrawHill New York,", "year": 1986 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Fabio Maria Carlucci", "Antonio D’Innocente", "Silvia Bucci", "Barbara Caputo", "Tatiana Tommasi" ], "title": "Domain generalization by solving jigsaw puzzles, 2019", "venue": null, "year": 2019 }, { "authors": [ "Anirban Chakraborty", "Manaar Alam", "Vishal Dey", "Anupam Chattopadhyay", "Debdeep Mukhopadhyay" ], "title": "Adversarial attacks and defences: A survey, 2018", "venue": null, "year": 2018 }, { "authors": [ "Moustapha Cisse", "Piotr Bojanowski", "Edouard Grave", "Yann Dauphin", "Nicolas Usunier" ], "title": "Parseval networks: Improving robustness to adversarial examples", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jeremy M. Cohen", "Elan Rosenfeld", "J. 
Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Imre Csiszar", "János Körner" ], "title": "Information theory: coding theorems for discrete memoryless systems", "venue": null, "year": 2011 }, { "authors": [ "Ilias Diakonikolas", "Gautam Kamath", "Daniel Kane", "Jerry Li", "Ankur Moitra", "Alistair Stewart" ], "title": "Robust estimators in high-dimensions without the computational intractability", "venue": "SIAM Journal on Computing,", "year": 2019 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Brandon Tran", "Aleksander Madry" ], "title": "Learning perceptually-aligned representations via adversarial robustness", "venue": null, "year": 1906 }, { "authors": [ "Dumitru Erhan", "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Visualizing higher-layer features of a deep network", "venue": "University of Montreal,", "year": 2009 }, { "authors": [ "Alhussein Fawzi", "Hamza Fawzi", "Omar Fawzi" ], "title": "Adversarial vulnerability for any classifier", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Daniel Jakubovitz", "Raja Giryes" ], "title": "Improving dnn robustness to adversarial attacks using jacobian regularization", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Harini Kannan", "Alexey Kurakin", "Ian Goodfellow" ], "title": "Adversarial logit pairing", "venue": "arXiv preprint arXiv:1803.06373,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Yann LeCun" ], "title": "The mnist database of handwritten digits. http://yann", "venue": "lecun. 
com/exdb/mnist/,", "year": 1998 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Shin ichi Maeda", "Masanori Koyama", "Ken Nakae", "Shin Ishii" ], "title": "Distributional smoothing with virtual adversarial training", "venue": null, "year": 2015 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Jonathan Uesato", "Pascal Frossard" ], "title": "Robustness via curvature regularization, and vice versa", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Sergei Sergeevich Platonov" ], "title": "The fourier transform of functions satisfying the lipschitz condition on rank 1 symmetric spaces", "venue": "Siberian Mathematical Journal,", "year": 2005 }, { "authors": [ "Jonas Rauber", "Wieland Brendel", "Matthias Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models, 2017", "venue": null, "year": 2017 }, { "authors": [ "Andrew Slavin Ross", "Finale Doshi-Velez" ], "title": "Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients", "venue": "In Thirty-second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Brandon Tran", "Andrew Ilyas", "Logan Engstrom", "Aleksander Madry" ], "title": "Computer vision with a single (robust) classifier", "venue": null, "year": 1906 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": "arXiv preprint arXiv:1312.6034,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "C Edward" ], "title": "Titchmarsh. Introduction to the theory of fourier integrals", "venue": null, "year": 1948 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Haohan Wang", "Songwei Ge", "Eric P Xing", "Zachary C Lipton" ], "title": "Learning robust global representations by penalizing local predictive power", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Haohan Wang", "Zexue He", "Eric P. Xing" ], "title": "Learning robust representations by projecting superficial statistics out", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Haohan Wang", "Xindi Wu", "Pengcheng Yin", "Eric P. 
Xing" ], "title": "High frequency component helps explain the generalization of convolutional neural networks", "venue": "CoRR, abs/1905.13545,", "year": 2019 }, { "authors": [ "Tobias Weyand", "Ilya Kostrikov", "James Philbin" ], "title": "Planet - photo geolocation with convolutional neural networks", "venue": "Lecture Notes in Computer Science,", "year": 2016 }, { "authors": [ "Eric Wong", "J. Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P. Xing", "Laurent El Ghaoui", "Michael I. Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Huan Zhang", "Hongge Chen", "Zhao Song", "Duane Boning", "inderjit dhillon", "Cho-Jui Hsieh" ], "title": "The limitations of adversarial training and the blind-spot attack", "venue": "In International Conference on Learning Representations,", "year": 2019 } ]
[ { "heading": null, "text": "Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans tend to be more sensitive to lower-frequency (larger-scale) patterns, we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel. We apply our regularization onto several popular training methods, demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness. Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients." }, { "heading": "1 INTRODUCTION", "text": "In recent years, deep learning models have demonstrated remarkable capabilities for predictive modeling in computer vision, leading some to liken their abilities on perception tasks to those of humans (e.g., Weyand et al., 2016). However, under closer inspection, the limits of such claims to the narrow scope of i.i.d. data become clear. For example, when faced with adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015) or even in non-adversarial domain-agnostic cross-domain evaluations (Wang et al., 2019a;b; Carlucci et al., 2019), performance collapses, dispelling claims of human-like perceptive capabilities and calling into doubt more ambitious applications of this technology in the wild.\nA long line of recent research has investigated the robustness of neural networks, including investigations of the high-dimension nature of models (Fawzi et al., 2018), enlarging the gaps between decision boundaries (Zhang et al., 2019a), training the models with augmented examples through attack methods (Madry et al., 2018), and even guaranteeing the robustness of models within given radii of perturbation (Wong & Kolter, 2018; Cohen et al., 2019). Compared to earlier methods, these recent works enjoy stronger robustness both as assessed via theoretical guarantees and empirically via quantitative performance against strong attacks. However, despite the success of these techniques, vulnerabilities to new varieties of attacks are frequently discovered (Zhang et al., 2019b).\nIn this paper, we aim to lessen the dependency of neural networks on high-frequency patterns in images, regularizing CNNs to focus on the low-frequency components. Therefore, the main argument of this paper is that: by regularizing the CNN to be most sensitive to the low-frequency components of an image, we can improve the robustness of models. Interestingly, this also appears to lead to more perceptually-aligned gradients. Further, as Wang et al. (2019c) explicitly defined the low (or high)-frequency components as images reconstructed from the low (or high)-end of the image frequency domain (as is frequently discussed in neuroscience literature addressing human recognition of shape (Bar, 2004) or face (Awasthi et al., 2011)), we continue with this definition and demonstrate that a smooth kernel can filter out the high-frequency components and improve the models’ robustness.\nWe test our ideas and show the empirical improvement over popular adversarial robust methods with standard evaluations and further use model interpretation methods to understand how the models make decisions and demonstrate that the regularization helps the model to generate more perceptually-aligned gradients." 
}, { "heading": "2 RELATED WORK", "text": "Adversarial examples are samples with small perturbations applied that are imperceptible to humans but can nevertheless induce misclassification in machine learning models (Szegedy et al., 2013)). The discovery of adversarial examples spurred a torrent of research, much of it consisting of an arm race between those inventing new attack methods and others offering defenses to make classifiers robust to these sorts of attacks. We refer to survey papers such as (Akhtar & Mian, 2018; Chakraborty et al., 2018) and only list a few most relevant works about applying regularizations to the networks to improve the adversarial robustness, such as regularizations constraining the Lipschitz constant of the network (Cisse et al., 2017) (Lipschitz smoothness), regularizing the scale of gradients (Ross & Doshi-Velez, 2018; Jakubovitz & Giryes, 2018) (smooth gradients), regularizing the curvature of the loss surface (Moosavi-Dezfooli et al., 2019) (smooth loss curvature), and promoting the smoothness of the model distribution (Miyato et al., 2015). These regularizations also use the concept of “smoothness,” but different from ours (small differences among the adjacent weights).\nRecently, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) has become one of the most popular defense methods, based on the simple idea of augmenting the training data with samples generated through attack methods (i.e., threat models). While adversarial training excels across many evaluations, recent evidence exposes its new limitations (Zhang et al., 2019b), suggesting that adversarial robustness remains a challenge.\nKey differences: In this paper, we present a new technique penalizing differences among the adjacent components of convolutional kernels. Moreover, we expand upon the recent literature demonstrating connections between adversarial robustness and perceptually-aligned gradients." }, { "heading": "3 SMOOTH KERNEL REGULARIZATION", "text": "Intuition. High-frequency components of images are those reconstructed from the high-end of the image frequency-domain through inverse Fourier transform. This definition was also verified previously by neuroscientists who demonstrated that humans tend to rely on the low-frequency component of images to recognize shapes (Bar, 2004) and faces (Awasthi et al., 2011). Therefore, we argue that the smooth kernel regularization is effective because it helps to produce models less sensitive to high-frequency patterns in images. We define a smooth kernel as a convolutional kernel whose weight at each position does not differ much from those of its neighbors, i.e., (wi,j −wh,k∈N(i,j))2 is a small number, where w denotes the convolutional kernel weight, i, j denote the indices of the convolutional kernel w, and N(i, j) denotes the set of the spatial neighbors of i, j.\nWe note two points that support our intuition.\n1. The frequency domain of a smooth kernel has only negligible high-frequency components. This argument can be shown with Theorem 1 in (Platonov, 2005). Roughly, the idea is to view the weight matrix w as a function that maps the index of weights to the weights: w(i, j) → wi,j , then a smooth kernel can be seen as a Lipschitz function with constant α. 
As pointed out by Platonov (2005), Titchmarsh (1948) showed that when 0 < α < 1, in the frequency domain, the sum of all the high-frequency components with a radius greater than r will converge to a small number, suggesting that the high-frequency components (when r is large) are negligible.\n2. A kernel with negligible high-frequency components will weigh the high-frequency components of input images accordingly. This argument can be shown through the Convolution Theorem (Bracewell, 1986), which states $w \circledast x = \mathcal{F}^{-1}(\mathcal{F}(w) \odot \mathcal{F}(x))$, where $\mathcal{F}(\cdot)$ stands for the Fourier transform, $\circledast$ stands for the convolution operation, and $\odot$ stands for point-wise multiplication. As the theorem states, the convolution operation on images is equivalent to the element-wise multiplication of the image frequency domains. Therefore, roughly, if w has negligible high-frequency components in the frequency domain, it will weigh the high-frequency components of x accordingly with negligible weights. Naturally, this argument only pertains to a single convolution, and we rely on our intuition that repeated applications of these smooth kernels across multiple convolution layers in a nonlinear deep network will have some cumulative benefit.\nFormally, we calculate our regularization term $R_0(w)$ as follows:\n$R_0(w) = \sum_{i,j} \sum_{(h,k) \in N(i,j)} (w_{i,j} - w_{h,k})^2$\nWe also aim to improve this regularization by trying a few additional heuristics:\n• First, we notice that directly appending $R_0(w)$ will sometimes lead to models that achieve a small value of $R_0(w)$ by directly scaling down every coefficient of w proportionally, without changing the fluctuation pattern of the weights. To fix this problem, we directly subtract the scale of w (i.e., $\sum_{i,j} w_{i,j}^2$) from $R_0(w)$.\n• Another heuristic to fix this same problem is to directly divide $R_0(w)$ by the scale of w. Empirically, we do not observe significant differences between these two heuristics. We settle on the first heuristic because of the difficulty in calculating the gradient when a matrix is in the denominator.\n• Finally, we empirically observe that the regularization above plays a significant role during the early stage of training, but may damage the overall performance later when the regularization pulls towards smoothness too much. To mitigate this problem, we use an exponential function to strengthen the effect of the regularization when the value is big and to weaken it when the value is small.\nOverall, our final regularization is:\n$R(w) = \exp \Big( \sum_{i,j} \sum_{(h,k) \in N(i,j)} (w_{i,j} - w_{h,k})^2 - \sum_{i,j} w_{i,j}^2 \Big)$\nIn practice, the convolutional kernel is usually a 4-dimensional tensor, while our method only encourages smoothness over the two spatial dimensions corresponding to the 2D images. Thus, we only regularize through these two dimensions, broadcasting the operation through the channels.\nBecause a repeated calculation of each kernel component's distance to its neighbors would double count some pairs, our implementation instead enumerates over all pairs of neighbors, counting each squared difference only once towards the total penalty.\nWe can directly append the regularization λR(w) to most loss functions, where λ is a tuning hyperparameter.
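A minimal PyTorch sketch of R(w) as described above, counting each horizontal and vertical neighbor pair once over the two spatial dimensions and broadcasting over channels (function and variable names are ours; we assume a 4-neighborhood, since the exact neighbor set N(i, j) is not pinned down in the text):

```python
import torch

def smooth_kernel_penalty(w):
    # R(w) = exp( sum of squared adjacent differences - sum of squared weights ).
    # w: conv kernel of shape (out_ch, in_ch, H, W); smoothness is measured over
    # the last two (spatial) dimensions only, each neighbor pair counted once.
    dh = (w[..., 1:, :] - w[..., :-1, :]).pow(2).sum()  # vertical neighbor pairs
    dw = (w[..., :, 1:] - w[..., :, :-1]).pow(2).sum()  # horizontal neighbor pairs
    return torch.exp(dh + dw - w.pow(2).sum())

# Usage: add lam * (sum of penalties over all conv kernels) to the training loss.
conv = torch.nn.Conv2d(3, 16, kernel_size=3)
print((0.01 * smooth_kernel_penalty(conv.weight)).item())
```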
In the following experiments, we append λR(w) to the vanilla loss function (cross-entropy loss), the Trades loss (Zhang et al., 2019a), the adversarial training loss (Madry et al., 2018), and a variation of the logit pairing loss (Kannan et al., 2018), as introduced in the following paragraphs.\nAdversarial training works by fitting the model using adversarial examples generated on the fly at train time by the threat model. The Trades loss fits the model with clean examples while regularizing the softmax of augmented adversarial examples to be close to that produced for the corresponding clean examples. A natural alternative is to fit the model with augmented adversarial examples while regularizing the softmax of clean examples to be close to that of the corresponding adversarial examples, which is related to logit pairing. However, to make the comparison consistent, we use a variation of logit pairing, penalizing the KL divergence of the softmax (rather than the `2 distance over logits), following the Trades loss, which also uses KL divergence over the softmax as the distance metric.\nTo be specific, with the standard notations such as 〈X,Y〉 denoting a data set and 〈x,y〉 denoting a sample, the logit pairing loss is formalized as:\n$\min \; \mathbb{E}_{\langle x, y \rangle \sim \langle X, Y \rangle} \; l(f(x'; \theta); y) + \gamma\, k(f_l(x'; \theta), f_l(x; \theta)), \quad \text{where } x' = \operatorname*{argmax}_{d(x', x) \leq \epsilon} l(f(x'; \theta); y)$\nwhere d(·, ·) and k(·, ·) are distance functions, f_l(·; ·) denotes the model f(·; ·) but outputs the softmax instead of a prediction, l(·, ·) is a cost function, γ is a tuning hyperparameter, and ε is the upper bound of the perturbation. In our following experiments, we consider d(·, ·) to be the `∞ norm, following the popular adversarial training set-up, and k(·, ·) to be the KL divergence, following the standard Trades loss.\nIntuitively, our usage of the KL divergence in the logit pairing loss is argued to be advantageous because Pinsker's inequality suggests that the KL divergence upper-bounds the total variation (TV) distance (e.g., Csiszar & Körner, 2011). The usage of the KL divergence can thus be seen as a regularization that limits the hypothesis space to the parameters that yield a small TV distance over perturbations of samples, which is linked to the robustness of an estimator, a topic that has been studied by the statistics community for decades (e.g., see (Diakonikolas et al., 2019) and references therein)." }, { "heading": "4 EXPERIMENTS", "text": "To empirically validate our methods, we first consider a simple synthetic experiment to demonstrate the effectiveness of our proposed solutions. Then, with standard data sets such as MNIST (LeCun, 1998), FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky & Hinton, 2009) and Restricted ImageNet (Tsipras et al., 2019), we evaluate our methods with well-established criteria, such as `∞ bounded accuracy. We also leverage saliency-based visualization methods to understand how the model understands each class. Most experiments are conducted with a simple basic convolutional neural network with two convolution layers and two fully connected layers, while the CIFAR10 experiment is conducted with ResNet18 and the Restricted ImageNet experiment with ResNet50 (more details of the models are in the Appendix). As we mentioned previously, we apply the new regularization to four different losses: the vanilla loss (denoted as V), the Trades loss (denoted as T) (Zhang et al., 2019a), adversarial training (denoted as A) (Madry et al., 2018), and our variation of logit pairing (denoted as L). T, A, L all adopt `∞ norm bounded PGD as the threat model.
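A hedged PyTorch sketch of the logit-pairing variant formalized above: adversarial cross-entropy plus γ times a KL term between the softmax of clean and adversarial inputs. The PGD routine below is a generic, untuned `∞ attack included only so the example is self-contained, and the direction of the KL term is our reading of k(f_l(x'), f_l(x)).

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, steps=10, alpha=0.075):
    # Plain l_inf PGD: ascend the cross-entropy within the eps-ball around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv

def logit_pairing_loss(model, x, y, gamma=1.0):
    # l(f(x'; theta); y) + gamma * k(f_l(x'; theta), f_l(x; theta)).
    x_adv = pgd_attack(model, x, y)
    logits_adv, logits_clean = model(x_adv), model(x)
    ce = F.cross_entropy(logits_adv, y)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1), reduction="batchmean")
    return ce + gamma * kl
```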
We use VR, TR, AR, LR to denote the methods after our regularization is plugged in. We evaluate our methods against a wide range of adversarial attack methods, including FGSM (Goodfellow et al., 2015), PGD (Madry et al., 2018), C&W (Carlini & Wagner, 2017), DeepFool (both $\ell_2$ and $\ell_\infty$) (Moosavi-Dezfooli et al., 2016), ADef, a method that iteratively applies small deformations to the image (Alaifari et al., 2019), and Salt&Pepper, a black-box method that adds noise to the image. For these attack methods, we use the default parameters in Foolbox (Rauber et al., 2017), and our experiments suggest that these default parameters are effective enough in most cases. For every data set, we first tune the $\ell_\infty$-norm perturbation bound $\epsilon$ of the adversarial training method and then use the same setting for the Trades loss and the variation of logit pairing. We tune γ within {0.1, 1.0, 10.0} and tune λ within {0.01, 0.1, 1.0, 10.0, 100.0}." }, { "heading": "4.1 SYNTHETIC EXPERIMENTS FOR SANITY CHECKING", "text": "We first use a basic data set of four shapes (https://www.kaggle.com/smeschke/four-shapes) to test whether our proposed method helps regularize the model to behave as we desire. Each image in this data set has a white background and a black foreground depicting one of the four shapes: circle, star, square, and triangle. Our goal is to train a convolutional neural network to classify the images into one of these four shapes.\nWe compare the models trained with the four basic losses V, T, A, L and these models with our regularization, denoted as VR, TR, AR, and LR, when λ = 100.0. To further test our idea, we also test the regularization with the hyperparameter set to a negative value λ = −100.0 to inspect the consequences when we regularize the model towards high-frequency kernels. The resulting models are denoted as VH, TH, AH, LH respectively, according to the basic losses.\nWe report our inspections in Figure 1: Figure 1(a) visualizes the convolution kernels (due to space limitations, we only visualize the first four convolutional kernels); Figure 1(b) visualizes the corresponding frequency domain in absolute values; Figure 1(c) visualizes the internal representation after an image depicting a star is passed through the kernels.\nFigure 1(a) shows that our regularization guides the model towards a smooth kernel, across all the basic losses. Also, if we apply our regularization with a negative parameter, the weights of the resulting kernel tend to fluctuate more dramatically. Figure 1(b) validates our argument that a smooth kernel only has negligible high-frequency components. As we can see, the frequency domain corresponding to the kernels when our regularization is applied shows significant differences between low-frequency components (center of the visualization) and high-frequency components (periphery of the visualization). Figure 1(c) further validates our intuition, showing that, in comparison to internal representations summarized by kernels from the basic losses, those influenced by our regularization are more sensitive to the low-frequency signal (e.g., the shape of the input), and the internal representation with our regularization when the parameter is negative tends to focus more on the high-frequency signals.\nFurther, we check the mechanism of our model by inspecting how adversarial examples deceive the models. Figure 2 shows the four on-average most deceptive adversarial examples (those the models predict incorrectly with the highest confidence) generated by FGSM. 
Notations follow the same convention as the previous case, and O denotes the original image.\nWhile many images have to be perturbed by a humanly perceivable amount to deceive the model, we can notice that the adversarial examples for models with our regularization (?R) tend to behave in a way that can be understood by a human. The most convincing examples are in the first row for A and AR, where we can clearly see that the adversarial examples alter the decisions from star to circle. Other adversarial examples for ?R models also introduce large areas that can be interpreted as the shape. In contrast, adversarial examples for the other models tend to introduce scattered patches, which most people would probably not consider to be the shape. Also, if we apply our regularization with a negative parameter (?H), the patches tend to behave in an even more scattered manner." }, { "heading": "4.2 STANDARD NUMERICAL EVALUATION", "text": "In Table 1, we report the prediction accuracy over the generated adversarial examples across the attack methods. For MNIST and FashionMNIST, we do not limit the $\epsilon$ of adversarial examples. In principle, when there is no constraint, one should always be able to find an adversarial example for any sample; in practice, however, many search attempts fail when the attack methods are set with the default hyperparameters in Foolbox. We consider these search failures (under default parameters) to also be a measure of the robustness of the models. For CIFAR10 and Restricted ImageNet, we set $\epsilon$ to 0.1 and 0.05, respectively (the maximum pixel value is 1.0).\nOverall, across most of the settings, our regularization helps achieve numerically the best adversarially robust models. Impressively, for MNIST and FashionMNIST, for some attack methods (e.g., both versions of DeepFool), our regularization can improve the robustness significantly even when only applied to the vanilla training loss, suggesting the importance of the smoothness regularization. Also, for these two datasets, the improvements of our regularization over the non-regularized counterparts are mostly significant. For CIFAR10 and Restricted ImageNet, the performance gains are less significant but still observable. In the Appendix, we report the accuracy-$\epsilon$ curves over $\ell_0$, $\ell_2$, and $\ell_\infty$ distances, for more thorough comparisons. In general, the performances evaluated by the curves are consistent with the results in Table 1.\nFigure 3: Sample-independent interpretation of models trained over MNIST. I stands for the input.\nFigure 4: Sample-independent interpretation of models for FashionMNIST. I stands for the input." }, { "heading": "4.3 INSPECTING MODEL’S PERCEPTION OF CLASS", "text": "We also leverage one of the most classic model-interpretation methods, activation maximization (Erhan et al., 2009), to further demonstrate the strength of our regularization. Concretely, we follow (Simonyan et al., 2013; Engstrom et al., 2019) and maximize the logit of a certain class so that the most representative features of that class are exaggerated in the input image. Specifically, starting with an input image of Gaussian noise, we apply projected gradient descent for 10000 iterations with learning rate 0.001 to update the input image. 
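The activation maximization procedure just described translates directly into code. The sketch below is a minimal PyTorch version under our assumptions: the input shape, the function name visualize_class, and the projection onto the [0, 1] pixel range are ours, since the excerpt does not specify the projection set.

```python
import torch

def visualize_class(model, target, shape=(1, 3, 32, 32), steps=10_000, lr=1e-3):
    """Activation maximization from Gaussian noise: ascend the target-class
    logit for `steps` iterations, projecting back to the valid pixel range."""
    x = torch.randn(shape)
    for _ in range(steps):
        x.requires_grad_(True)
        logit = model(x)[0, target]
        grad = torch.autograd.grad(logit, x)[0]
        x = (x + lr * grad).detach().clamp(0.0, 1.0)  # gradient step + projection
    return x
```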
Notice that the interpretation is sample-independent.\nFigure 3 depicts what the models consider to be the digits. While V and T can barely be interpreted by humans, when our regularization is plugged in, the patterns become observable, with impressive examples such as Digits 0, 2, and 3. A can also deliver interpretable decisions (e.g., Digits 3 and 5), and our regularization helps significantly in the other cases, such as Digits 0, 1, 2, 4, and 8. Figure 4 shows a similar story for the FashionMNIST dataset: while A might have the cleanest interpretation for the “sneaker” case, our regularization (especially AR) probably has the best interpretation in all other cases, with good examples such as “Trouser,” “Dress,” and “Boot.” Interestingly, AR is the only method that interprets “Bag” with a strap, and the average image of all training “Bag” samples in FashionMNIST is a bag with a strap. Figure 5 shows the visualization of models trained on CIFAR10. While A seems to have the best interpretation in the “horse” case, AR and LR have equal or better interpretations in comparison with A in the other cases. Impressively, only AR and LR understand “bird,” and only AR understands “deer”. Figure 6 shows the visualization for Restricted ImageNet (results of simpler models are not shown because they cannot be interpreted). AR is the only method that can describe the outline of the “bird” and “crab” classes, while the models retain more or less similar interpretation power for the other labels.\nOther results, such as visualizations of targeted attacks through saliency-based methods and a selective visualization of adversarial examples generated along with the experiments, are shown in the Appendix. Overall, the empirical evidence supports our intuition in Section 3: the regularization helps push the model to focus on the low-frequency components of the image and thus leads to more perceptually-aligned gradients." }, { "heading": "5 CONCLUSION", "text": "Inspired by neuroscience literature emphasizing the connection between low-frequency components and shape recognition (Bar, 2004; Awasthi et al., 2011), we proposed a smooth kernel regularization that forces the CNN to learn smooth convolutional kernels (kernels with small differences among adjacent weights) during training. As the relation between smoothness and low frequency can be argued intuitively and is supported by known theoretical results (Titchmarsh, 1948; Bracewell, 1986; Platonov, 2005), our regularization should help the model depend more on the low-frequency components of images. To verify the effectiveness of the regularization, we plug the idea into multiple training losses, including the vanilla loss, the Trades loss (Zhang et al., 2019a), the adversarial training loss (Madry et al., 2018), as well as a variation of the logit pairing loss (Kannan et al., 2018). With seven different attack methods, we demonstrate the empirical strength of our regularization with standard numerical evaluations. Further, we also leverage standard model interpretation methods to explain the decisions of the models, showing that our technique, like those demonstrated by Santurkar et al. (2019), tends to result in more perceptually-aligned gradients." 
}, { "heading": "A MODEL AND HYPERPARAMETER CHOICES", "text": "For the MNIST and FashionMNIST data sets, the model is a simple architecture with two convolutional layers and two fully connected layers. The $\ell_\infty$ perturbation bound of PGD is set to 0.3/1.0 for MNIST and 0.1/1.0 for FashionMNIST. For CIFAR10, the model is a ResNet18, and the $\ell_\infty$ perturbation bound of PGD is set to 0.03/1.0 (roughly 8/255). For Restricted ImageNet, the model is a ResNet50, and the $\ell_\infty$ perturbation bound of PGD is set to 0.005/1.0; during preprocessing, the pixel values of the images are divided by the standard deviation (0.2575), as is the perturbation bound. Also, for Restricted ImageNet, we continue from either the standard ImageNet-pretrained ResNet50 (for the V and T losses) or the adversarially trained ResNet50 on Restricted ImageNet (Santurkar et al., 2019) (for the A and L losses). With our hardware settings (NVIDIA 1080Ti), we cannot effectively train the Trades loss over a ResNet50." }, { "heading": "B ACCURACY-EPSILON CURVES", "text": "The accuracy-epsilon curves for $\ell_0$, $\ell_2$ and $\ell_\infty$ bounds are shown in Figure 7, Figure 8, and Figure 9." }, { "heading": "C TARGETED ATTACK", "text": "We also take advantage of the gradient to perform targeted attacks, as shown in the following figures. The titles of the columns describe the original classes, and the titles of the rows describe the target classes." }, { "heading": "D SELECTIVE ADVERSARIAL EXAMPLES", "text": "We visualize the generated adversarial examples to help us evaluate the models. We visualize the on-average most deceptive examples (those with the highest prediction confidence on the wrong class). We plot one example for each class of the data. For MNIST and FashionMNIST, we focus on the visualization of adversarial examples generated by the ADef attack because this attack is more visually aligned with how humans perceive the images." } ]
2019
SMOOTH KERNELS IMPROVE ADVERSARIAL ROBUSTNESS
SP:b9b8e3efa69342c90b91dcb29bda1e2f8127581e
[ "This paper proposes a neural topic model that aims to discover topics by minimizing a version of the PLSA loss. According to PLSA, a document is represented as a mixture of topics, while a topic is a probability distribution over words, with documents and words assumed independent given topics. Thanks to this assumption, each of these probability distributions (word|topic, topic|document, and word|document) can essentially be expressed as a matrix multiplication of the other two, and EM is usually adopted for the optimization. This paper proposes to embed these relationships in a neural network and then optimize the model using SGD.", "I am unimpressed with the quality of the writing and presentation, to begin with. There are numerous grammatical errors and typos that make the paper a very difficult read. The presentation also follows an inequitable pattern where the background and related work are overemphasized and the actual contribution of the paper seems very limited. In its current form, this paper is not ready for publication at ICLR." ]
In this paper we present a model for unsupervised topic discovery in text corpora. The proposed model uses document, word, and topic lookup table embeddings as neural network parameters to build probabilities of words given topics, and probabilities of topics given documents. These probabilities are used to recover, by marginalization, probabilities of words given documents. For very large corpora, where the number of documents can be on the order of billions, using a neural auto-encoder based document embedding is more scalable than using a lookup table embedding, as is classically done. We thus extend the lookup-table-based document embedding model to a continuous auto-encoder based model. Our models are trained using probabilistic latent semantic analysis (PLSA) assumptions. We evaluated our models on six datasets with a rich variety of contents. The conducted experiments demonstrate that the proposed neural topic models are very effective in capturing relevant topics. Furthermore, in terms of the perplexity metric, the conducted evaluation benchmarks show that our topic models outperform the latent Dirichlet allocation (LDA) model, which is classically used to address topic discovery tasks.
[]
[ { "authors": [ "D.M. Blei", "J.D. Lafferty" ], "title": "Dynamic topic models", "venue": "International Conference on Machine Learning (ICML), pp", "year": 2006 }, { "authors": [ "D.M. Blei", "A.Y. Ng", "M.I. Jordan" ], "title": "Latent dirichlet allocation", "venue": "Journal of Machine Learning Research (JMLR), pp", "year": 2003 }, { "authors": [ "D.M. Blei", "J.D. McAuliffe" ], "title": "Supervised topic models", "venue": "Advances in Neural Information Processing Systems (NIPS),", "year": 2008 }, { "authors": [ "D.M. Blei", "T.L. Griffiths", "M.I. Jordan", "J. Tenenbaum" ], "title": "Hierarchical topic models and the nested chinese restaurant process", "venue": "Advances in Neural Information Processing Systems (NIPS),", "year": 2004 }, { "authors": [ "K. Bollacker", "C. Evans", "P. Paritosh", "T. Sturge", "J. Taylor" ], "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "venue": "ACM SIGMOD international conference on Management of data,", "year": 2008 }, { "authors": [ "J. Boyd-Grabber", "D.M. Blei" ], "title": "Multilingual topic models for unaligned text", "venue": "Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2009 }, { "authors": [ "E. Cambria", "B. White" ], "title": "Jumping nlp curves: A review of natural language processing research", "venue": "IEEE Computational Intelligence Magazine,", "year": 2014 }, { "authors": [ "Z. Cao", "S Li", "Y. Liu", "W. Li", "H. Ji" ], "title": "A novel neural topic model and its supervised extension", "venue": "AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "M. Carreira-Perpiñán", "G.E. Hinton" ], "title": "On contrastive divergence learning", "venue": "International Conference on Artificial Intelligence and Statistics(AISTATS),", "year": 2005 }, { "authors": [ "D. Cohn", "T. Hofmann" ], "title": "The missing link-a probabilistic model of document content and hypertext connectivity", "venue": "Advances in Neural Information Processing Systems,", "year": 2001 }, { "authors": [ "R. Collobert", "J. Weston", "L. Bottou", "M. Karlen", "K. Kavukcuoglu", "P. Kuksa" ], "title": "Natural language processing (almost) from scratch", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "A. Daud", "J. Li", "L. Zhou", "F. Muhammad" ], "title": "Knowledge discovery through directed probabilistic topic models: a survey", "venue": "Frontiers of computer science in China,", "year": 2010 }, { "authors": [ "A.B. Dieng", "C. Wang", "J. gao", "J. Paisley" ], "title": "Topicrnn: A reccurent neural network with lon range semantic dependency", "venue": "International Conference on Learning Representation,", "year": 2017 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "S. Dumais" ], "title": "Indexing by latent semantic analysis", "venue": "Journal of the American Society for Information Science,", "year": 1990 }, { "authors": [ "K. Farrahi", "D. Gatica-Perez" ], "title": "Probabilistic mining of socio-geographic routines from mobile phone data", "venue": "IEEE Journal of Selected Topics in Signal Processing,", "year": 2010 }, { "authors": [ "K. Farrahi", "D. Gatica-Perez" ], "title": "Discovering routines from large-scale human locations using probabilistic topic models", "venue": "ACM Transactions on Intelligent Systems and Technology (TIST),", "year": 2011 }, { "authors": [ "K. Farrahi", "D. 
Gatica-Perez" ], "title": "A probabilistic approach to mining mobile phone data sequences", "venue": "Personal and ubiquitous computing,", "year": 2014 }, { "authors": [ "M.D. Hoffman", "D.M. Blei", "F. Bach" ], "title": "Online learning for latent dirichlet allocation", "venue": "NIPS Proceedings of the 23rd International Conference on Neural Information Processing Systems Volume", "year": 2010 }, { "authors": [ "Thomas Hofmann" ], "title": "Unsupervised learning by probabilistic latent semantic analysis", "venue": "Machine learning,", "year": 2001 }, { "authors": [ "D.P. Kingma", "Ma. Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "H. Larochelle", "S. Lauly" ], "title": "Neural autoregressive topic model", "venue": "Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "H. Larochelle", "I. Murray" ], "title": "Neural autoregressive distribution estimator", "venue": "International Conference on Artificial Intelligence and Statistics(AISTATS), pp", "year": 2011 }, { "authors": [ "B. Liu", "L. Zhang" ], "title": "A survey of opinion mining and sentiment analysis", "venue": "Mining text data,", "year": 2012 }, { "authors": [ "J. Liu", "W.-C. Chang", "Y. Wu", "Y. Yang" ], "title": "Deep learning for extreme multi-label text classification", "venue": "International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2017 }, { "authors": [ "T. Mikolov", "I. Sutskever", "K. Chen", "G. Corrado", "J. Dean" ], "title": "Distributed representations of words and phrases", "venue": "Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "C.E. Moody" ], "title": "Mixing dirichlet topic models and word embeddings to make lda2vec", "venue": "[cs.CL],", "year": 2016 }, { "authors": [ "T.K. Moon" ], "title": "The expectation-maximization algorithm", "venue": "IEEE Signal processing magazine,", "year": 1996 }, { "authors": [ "D. Nadeau", "S. Sekine" ], "title": "A survey of named entity recognition and classification", "venue": "Lingvisticae Investigationes,", "year": 2007 }, { "authors": [ "A. Popescul", "D.M. Penncock", "S. Lawrence" ], "title": "The missing link-a probabilistic model of document content and hypertext connectivity", "venue": "Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2001 }, { "authors": [ "P. Quelhas", "J.-M. Odobez", "D. Gatica-Perez", "T. Tuytelaars" ], "title": "A thousand words in a scene", "venue": "Transactions on Pattern Analysis and Machine Intelligence (PAMI),", "year": 2007 }, { "authors": [ "S. Fabrizio" ], "title": "Machine learning in automated text categorization", "venue": "ACM computing surveys,", "year": 2002 }, { "authors": [ "R.R. Salakhutdinov", "G.E. Hinton" ], "title": "Replicated softmax: an undirected topic model", "venue": "Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "U. Shalit", "D. Weinshall", "G. Chechik" ], "title": "Modeling musical influence with topic models", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2013 }, { "authors": [ "N. Srivastava", "R. Salakhutdinov", "G. Hinton" ], "title": "Modelling documents with deep boltzmann machine", "venue": "Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2013 }, { "authors": [ "J. Varadarajan", "R. Emonet", "J.-M. Odobez" ], "title": "A sequential topic model for mining recurrent activities from long term video", "venue": "logs. 
International Journal of Computer Vision, pp", "year": 2013 }, { "authors": [ "P. Vincent", "H. Larochelle", "I. Lajoie", "Y. Bengio P.E. Manzagol" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of machine learning research,", "year": 2010 }, { "authors": [ "D. Vrandečić", "M. Krötzsch" ], "title": "Wikidata: a free collaborative knowledge base", "venue": "Communications of the ACM,", "year": 2014 }, { "authors": [ "I. Vulic", "W. De Smet", "J. Tang", "M.-F. Moens" ], "title": "Probabilistic topic models in a multilingual settings: an overview of its methodology and application", "venue": "Elsevier Information Processing and Management,", "year": 2015 }, { "authors": [ "L. Wan", "L. zhu", "R. Fergus" ], "title": "A hybrid neural network topic model", "venue": "International Conference on Artificial Intelligence and Statistics(AISTATS),", "year": 2011 }, { "authors": [ "C. Wang", "D.M. Blei", "J.D. Lafferty" ], "title": "Continuous time dynamic topic models", "venue": "International Conference on Machine Learning (ICML),", "year": 2008 }, { "authors": [ "L. Yao", "Y. Zhang", "B. Wei", "Z. Jin", "R. Zhang", "Q. Chen" ], "title": "Incorporating knowledge graph embedding into topic modelling", "venue": "AAAI Conference on Artificial Intelligence,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Nowadays, in the digital era, electronic text corpora are ubiquitous. These corpora can be company emails, newsgroup articles, online journal articles, Wikipedia articles, or video metadata (titles, descriptions, tags). These corpora can be very large, thus requiring automatic analysis methods that are investigated by researchers working on text content analysis (Collobert et al., 2011; Cambria & White, 2014). Investigated methods concern named entity recognition, text classification, etc. (Nadeau & Sekine, 2007; S., 2002).\nAn important problem in text analysis is structuring text corpora around topics (Daud et al., 2010; Liu & Zhang, 2012). Such tools would allow summarizing very large amounts of text documents into a limited, human-understandable number of topics. In computer science, many definitions of the concept of topic can be encountered. Two definitions are very popular. The first one defines a topic as an entity of a knowledge graph such as Freebase or Wikidata (Bollacker et al., 2008; Vrandečić & Krötzsch, 2014). The second one defines a topic as a probability distribution over the words of a given vocabulary (Hofmann, 2001; Blei et al., 2003). When topics are represented as knowledge graph entities, documents can be associated with identified concepts having very precise meanings. The main drawback is that knowledge graphs are in general composed of a very large number of entities. For example, in 2019, Wikidata counts about 40 million entities. Automatically identifying these entities requires building extreme classifiers trained with expensive labelled data (Puurula et al., 2014; Liu et al., 2017). When topics are defined as probability distributions over the words of a vocabulary, they can be identified using unsupervised methods that automatically extract them from text corpora. A precursor of such methods is the latent semantic analysis (LSA) model, which is based on factorizing the word-document co-occurrence counts matrix (Dumais, 1990). Since then, LSA has been extended to various probabilistic models (Hofmann, 2001; Blei et al., 2003), and more recently to neural network based models (Salakhutdinov & Hinton, 2009; Larochelle & Lauly, 2012; Wan et al., 2011; Yao et al., 2017; Dieng et al., 2017).\nIn this paper, we propose a novel neural network based model to automatically, in an unsupervised fashion, discover topics in a text corpus. The first variation of the model is based on a neural network that uses as inputs or parameters document, word, and topic discrete lookup table embeddings to represent probabilities of words given documents, probabilities of words given topics, and probabilities of topics given documents. However, because the number of documents in a given corpus can be very large, a discrete lookup table embedding explicitly associating an embedded vector with each document can be impractical. For example, in the case of online stores such as Amazon, or video platforms such as Dailymotion or Youtube, the number of documents is on the order of billions. To overcome this limitation, we propose a model that generates continuous document embeddings using a neural auto-encoder (Kingma & Welling, 2013). 
Our neural topic models are trained using a cross-entropy loss exploiting the probabilistic latent semantic analysis (PLSA) assumption stating that, given topics, words and documents can be considered independent.\nThe proposed models are evaluated on six datasets: KOS, NIPS, NYtimes, TwentyNewsGroup, Wikipedia English 2012, and Dailymotion English. The first four datasets are classically used to benchmark topic models based on bag-of-word representations (Dua & Graff, 2017). Wikipedia and Dailymotion are large-scale datasets counting about one million documents each. These latter datasets are used to qualitatively assess how our models behave at large scale. Conducted experiments demonstrate that the proposed models are effective in discovering latent topics. Furthermore, evaluation results show that our models achieve lower perplexity than latent Dirichlet allocation (LDA) trained on the same datasets.\nThe remainder of this paper is organized as follows. Section 2 discusses related work. Section 3 briefly presents the principles of topic generation with PLSA. Section 4 presents the first version of the model we propose, which is based on discrete topic, document, and word embeddings. Section 5 gives details about the second version of the model, which is based on embedding documents using a continuous neural auto-encoder. Section 6 provides details about the experiments conducted to assess the effectiveness of the proposed models. Finally, Section 7 draws conclusions and gives future research directions." }, { "heading": "2 RELATED WORK", "text": "Unsupervised text analysis with methods related to latent semantic analysis (LSA) has a long research history. Latent semantic analysis takes a high-dimensional text vector representation and applies linear dimensionality reduction methods such as singular value decomposition (SVD) to the word-document counts matrix (Dumais, 1990). The main drawback of LSA is its lack of statistical foundations, which limits the model's interpretability.\nProbabilistic latent semantic analysis (PLSA) was proposed by Hofmann (2001) to ground LSA on solid statistical foundations. PLSA is based on a well-defined generative model for text generation based on the bag-of-words assumption. PLSA can be interpreted as a probabilistic matrix factorisation of the word-document counts matrix. Because the PLSA model defines a probabilistic mixture model, its parameters can be estimated using the classical Expectation-Maximization (EM) algorithm (Moon, 1996). PLSA has been exploited in many applications: text modelling by Hofmann (2001), collaborative filtering by Popescul et al. (2001), web links analysis by Cohn & Hofmann (2001), and visual scene classification by Quelhas et al. (2007).\nThe main drawback of PLSA is that it is a generative model of the training data only: it does not apply to unseen data. To extend PLSA to unseen data, Blei et al. (2003) proposed latent Dirichlet allocation (LDA), which models documents via hidden Dirichlet random variables specifying probabilities on a lower-dimensional hidden space. The distribution over words of an unseen document is a continuous mixture over the document space and a discrete mixture over all possible topics. Modeling with LDA has been thoroughly investigated, resulting in dynamic topic models to account for topics' temporal dynamics by Blei & Lafferty (2006); Wang et al. (2008); Shalit et al. (2013); Varadarajan et al. 
(2013); Farrahi & Gatica-Perez (2014), hierarchical topic models to account for hierarchical topic structures by Blei et al. (2004), multilingual topic models to account for multilingual corpora by Boyd-Grabber & Blei (2009); Vulic et al. (2015), and supervised topic models to account for corpora composed of categorized documents (Blei & McAuliffe, 2008). Besides text modelling, LDA has been applied to discovering people's socio-geographic routines from mobile phone data by Farrahi & Gatica-Perez (2010; 2011; 2014), and to mining recurrent activities from long-term video logs by Varadarajan et al. (2013).\nLearning a topic model based on LSA, PLSA or LDA requires jointly considering all words, documents, and topics. This is a strong limitation when the vocabulary and the number of documents are very large. For example, for PLSA or LDA, learning the model requires maintaining a large matrix containing the probabilities of topics given words and documents (Hofmann, 2001; Blei et al., 2003). To overcome this limitation, Hoffman et al. (2010) proposed online training of LDA models using stochastic variational inference.\nRecently, with the rise of deep learning with neural networks that are trained using stochastic gradient descent on sample batches, novel topic models based on neural networks have been proposed. Salakhutdinov & Hinton (2009) proposed a two-layer restricted Boltzmann machine (RBM) called the replicated softmax to extract low-level latent topics from a large collection of unstructured documents. The model is trained using the contrastive divergence formalism proposed by Carreira-Perpiñán & Hinton (2005). Benchmarking the model's performance against LDA showed improvements in terms of unseen documents' perplexity and accuracy on retrieval tasks. Larochelle & Lauly (2012) proposed a neural auto-regressive topic model inspired by the replicated softmax model but replacing the RBM with the neural auto-regressive distribution estimator (NADE), which is a generative model over vectors of binary observations (Larochelle & Murray, 2011). An advantage of NADE over the RBM is that during training, unlike for the RBM, computing the gradient of the data's negative log-likelihood with respect to the model parameters does not require Monte Carlo approximation. Srivastava et al. (2013) generalized the replicated softmax model proposed by Salakhutdinov & Hinton (2009) to deep RBMs, which have more representational power.\nCao et al. (2015) proposed a neural topic model (NTM), and its supervised extension (sNTM), where word and document embeddings are combined. This work goes beyond the bag-of-words representation by embedding word n-grams with word2vec embeddings as proposed by Mikolov et al. (2013). Moody (2016) proposed lda2vec, a model combining a Dirichlet topic model, as in Blei et al. (2003), and word embeddings, as in Mikolov et al. (2013). The goal of lda2vec is to embed both words and documents in the same space in order to learn both representations simultaneously.\nOther interesting works combine probabilistic topic models such as LDA with neural network modelling (Wan et al., 2011; Yao et al., 2017; Dieng et al., 2017). Wan et al. (2011) proposed a hybrid model combining a neural network and a latent topic model. The neural network provides a lower-dimensional embedding of the input data, while the topic model extracts further structure from the neural network's output features. The proposed model was validated on computer vision tasks. Yao et al. 
(2017) proposed to integrate knowledge graph embeddings into probabilistic topic modelling by using as observations for the probabilistic topic model document-level word counts and knowledge graph entities embedded into vector form. Dieng et al. (2017) integrated into a recurrent neural network based language model global word semantic information extracted using a probabilistic topic model." }, { "heading": "3 TOPIC MODELLING WITH PROBABILISTIC LATENT SEMANTIC ANALYSIS", "text": "Probabilistic latent semantic analysis (PLSA) proposed by Hofmann (2001) is based on the bag-of-words representation defined in the following." }, { "heading": "3.1 BAG OF WORDS REPRESENTATION", "text": "The grounding assumption of the bag-of-words representation is that, for text content representation, only word occurrences matter. Word order can be ignored without harm to understanding.\nLet us assume available a corpus of documents D = {doc1, doc2, ..., doci, ..., docI}. Every document is represented by the occurrence counts of the words of a given vocabulary W = {word1, word2, ..., wordn, ..., wordN}. Let us denote by c(wordn, doci) the occurrence count of the n'th vocabulary word in the i'th document. The normalized bag-of-words representation of the i'th document is given by the empirical word occurrence probabilities:\n$$f_{ni} = \frac{c(\mathrm{word}_n, \mathrm{doc}_i)}{\sum_{m=1}^{N} c(\mathrm{word}_m, \mathrm{doc}_i)}, \quad n = 1, \ldots, N. \quad (1)$$\nWith the bag-of-words assumption, $f_{ni}$ is an empirical approximation of the probability that wordn appears in document doci, denoted p(wordn|doci)." }, { "heading": "3.2 PROBABILISTIC LATENT SEMANTIC ANALYSIS", "text": "Probabilistic latent semantic analysis (PLSA) modelling is based on the assumption that there is a set of unobserved topics T = {top1, top2, ..., topK} that explains the occurrences of words in documents. Given topics, words and documents can be assumed independent. Thus, under the PLSA assumption, the probability of the occurrence of a word wordn in a document doci can be decomposed as:\n$$p(\mathrm{word}_n|\mathrm{doc}_i) = \sum_{k=1}^{K} p(\mathrm{word}_n|\mathrm{top}_k)\, p(\mathrm{top}_k|\mathrm{doc}_i) \quad (2)$$\nHofmann (2001) used the expectation maximization algorithm (Moon, 1996) to estimate the probabilities of words given topics p(wordn|topk) and the probabilities of topics given documents p(topk|doci). It is important to note that PLSA, as well as LDA, are trained on the raw counts matrix (c(wordm, doci)) and not the normalized counts matrix (fni). The normalized counts matrix is used by the models we propose in the following sections." }, { "heading": "4 DISCRETE NEURAL TOPIC MODEL", "text": "The discrete neural topic model we propose is based on a neural network representation of the probabilities involved in representing the occurrences of words in documents: p(wordn|doci), p(wordn|topk), and p(topk|doci). These probabilities are parametrized by the documents', the words', and the topics' discrete lookup table embeddings.\nLet us denote by $x_i = (x_{di})_{d=1}^{D}$ a D-dimensional embedded vector representing the i'th document doci ($x_i$ is a column vector in $\mathbb{R}^D$). Similarly, we define $y_n = (y_{dn})_{d=1}^{D}$ and $z_k = (z_{dk})_{d=1}^{D}$, D-dimensional embedded vectors respectively representing word wordn and topic topk. Using these discrete lookup embeddings as parameters, the probability of words given documents can be written as:\n$$p(\mathrm{word}_n|\mathrm{doc}_i) = \frac{\exp(y_n^\top x_i + b_n)}{\sum_{m=1}^{N} \exp(y_m^\top x_i + b_m)}. \quad (3)$$\nSimilarly, the probabilities of words given a topic are defined as:\n$$p(\mathrm{word}_n|\mathrm{top}_k) = \frac{\exp(y_n^\top z_k + b_n)}{\sum_{m=1}^{N} \exp(y_m^\top z_k + b_m)}, \quad (4)$$\nand the probability of a topic given a document is defined as\n$$p(\mathrm{top}_k|\mathrm{doc}_i) = \frac{\exp(z_k^\top x_i + b_k)}{\sum_{l=1}^{K} \exp(z_l^\top x_i + b_l)} \quad (5)$$\nIn Equations 3, 4, 5, and in the following equations, although different, all neural network biases are denoted by b. We use this convention to avoid burdening the reader with too many notations.\nFigures 1a, 1b and 1c give schematic representations of the neural networks modelling the probabilities of words given documents, of words given topics, and of topics given documents. It is worth noticing that, because the probability of words given documents (see Equation 3) is based on the scalar product between word and document vectors, the higher the probability of a word occurring in a document, the closer its vector $y_n$ will be to the document's vector $x_i$. A similar analysis can be derived about the proximity of word and topic vectors, and of topic and document vectors.\nThe probabilities of words given topics (4) and of topics given documents (5) can be combined according to the PLSA assumption (Equation 2) to recover the probabilities of words given documents as:\n$$p(\mathrm{word}_n|\mathrm{doc}_i) = \sum_{k=1}^{K} \frac{\exp(y_n^\top z_k + b_n)}{\sum_{m=1}^{N} \exp(y_m^\top z_k + b_m)} \times \frac{\exp(z_k^\top x_i + b_k)}{\sum_{l=1}^{K} \exp(z_l^\top x_i + b_l)} \quad (6)$$\nTo train the model, we optimize the embedding and bias parameters using stochastic gradient descent with a cross-entropy loss, so that the probabilities of words given documents in Equations 3 and 6 match the empirical bag-of-words frequencies defined in Equation 1." }, { "heading": "5 CONTINUOUS NEURAL TOPIC MODEL", "text": "The discrete neural topic model described in Section 4 has two main drawbacks. First, it only models the training data and cannot be applied to unseen data. Second, it requires building an explicit vector representation $x_i$ for every document i = 1, 2, ..., I. In practice the number of documents can be very large, possibly in the order of billions. A solution to these issues is to use continuous embeddings to represent documents instead of discrete lookup table embeddings (Vincent et al., 2010).\nContinuous document embeddings are built using a neural auto-encoder that maps input documents doci, represented by their empirical word frequencies $f_{ni}$, onto themselves through a D-dimensional bottleneck layer $x_i = (x_{di})_{d=1}^{D}$, which is then taken as the document embedding. This is done as:\n$$\sigma\Big(\sum_{n=1}^{N} y_{dn} f_{ni} + b_{di}\Big) = x_{di} \quad (7)$$\n$$\frac{\exp\big(\sum_{d=1}^{D} \tilde{y}_{dn} x_{di} + \tilde{b}_{dn}\big)}{\sum_{m=1}^{N} \exp\big(\sum_{d=1}^{D} \tilde{y}_{dm} x_{di} + \tilde{b}_{dm}\big)} = f_{ni} \quad (8)$$\nwhere σ is the rectified linear unit (ReLU) activation function. Variables $y = (y_{dn})$ and $\tilde{y} = (\tilde{y}_{dn})$ are neural network parameters, and y is taken to be the word embeddings.\nFigure 1b gives a schematic visualization of the continuous document embedding model. 
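To make Equations 3-8 concrete, here is a minimal PyTorch sketch of both model variants (the paper's implementation is in TensorFlow; the class names and layer sizes here are our own illustrative choices). The discrete model parameterizes the three softmaxes with lookup tables and recovers p(word|doc) via the PLSA marginalization of Equation 6; the continuous variant replaces the document lookup table with the auto-encoder bottleneck of Equations 7-8.

```python
import torch
import torch.nn as nn

class DiscreteNTM(nn.Module):
    """Lookup-table parameterization of Eqs. 3-6."""
    def __init__(self, n_docs, n_words, n_topics, dim):
        super().__init__()
        self.x = nn.Embedding(n_docs, dim)      # document embeddings
        self.y = nn.Embedding(n_words, dim)     # word embeddings
        self.z = nn.Embedding(n_topics, dim)    # topic embeddings
        self.b_word = nn.Parameter(torch.zeros(n_words))
        self.b_topic = nn.Parameter(torch.zeros(n_topics))

    def forward(self, doc_ids):
        x = self.x(doc_ids)                                        # (B, D)
        p_w_t = torch.softmax(self.z.weight @ self.y.weight.t()
                              + self.b_word, dim=1)                # Eq. 4: (K, N)
        p_t_d = torch.softmax(x @ self.z.weight.t()
                              + self.b_topic, dim=1)               # Eq. 5: (B, K)
        return p_t_d @ p_w_t                                       # Eq. 6: (B, N)

class DocEncoder(nn.Module):
    """Auto-encoder document embedding of Eqs. 7-8: the ReLU bottleneck x
    replaces the document lookup table for the continuous model."""
    def __init__(self, n_words, dim):
        super().__init__()
        self.enc = nn.Linear(n_words, dim)        # y, b of Eq. 7
        self.dec = nn.Linear(dim, n_words)        # y-tilde, b-tilde of Eq. 8

    def forward(self, f):                         # f: (B, N) word frequencies
        x = torch.relu(self.enc(f))               # Eq. 7: document embedding
        f_hat = torch.softmax(self.dec(x), dim=1) # Eq. 8: reconstruction
        return x, f_hat

# Training sketch: cross-entropy against the empirical frequencies f (Eq. 1),
# e.g. loss = -(f * p_word_given_doc.log()).sum(dim=1).mean()
```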
Because of its continuous embeddings, this model can encode an unlimited number of documents, as long as the embedding dimension D is large enough.\nAs for the discrete topic model, the document and word vector representations $x_i$ and $y_n$, and the topic vectors $z_k$, are combined to compute probabilities of words given topics using Equation 4, probabilities of topics given documents using Equation 5, and probabilities of words given documents using Equation 6.\nTo train the continuous neural topic model, we optimize the parameters $x_i$, $y_n$ and $z_k$ using stochastic gradient descent with a cross-entropy loss, such that the models in Equations 7 and 6 match the empirical bag-of-words frequencies in Equation 1.\nIt has to be noticed that, apart from biases, our models' parameters are constituted by the embedding parameters. This allows building a model with a limited set of parameters, exploiting parameter sharing as a regularization procedure. For the auto-encoder model in Equation 7, we chose different encoding (y) and decoding ($\tilde{y}$) parameters to avoid over-constraining the model. However, if a further reduction of the number of model parameters is targeted, these two variables can be considered to be the transposes of one another." }, { "heading": "6 EXPERIMENTS", "text": "" }, { "heading": "6.1 EVALUATION PROTOCOL", "text": "We evaluated our models on six datasets. Four of them are classical datasets used to evaluate bag-of-words models: NIPS full papers, KOS blog entries, NYTimes news articles, and the Twenty News Group dataset. The first three datasets can be obtained from the UCI machine learning repository created by Dua & Graff (2017). The Twenty News Group dataset is part of the datasets available with the well-known Python Scikit-Learn package.\nThe two other datasets are Wikipedia English 2012 and Dailymotion English, used to qualitatively assess how our models perform on datasets with a very large number of documents. Apart from the Dailymotion dataset, all other ones are publicly available and can be used for model benchmarking. Table 1 gives the corpora's statistics. These corpora are very diverse in terms of corpus size, vocabulary size, and document content.\nWe evaluate the discrete neural topic model (D-NTM) presented in Section 4 and its continuous extension (C-NTM) presented in Section 5. These models are compared to the latent Dirichlet allocation (LDA) model, developed by Blei et al. (2003), taken as the baseline. We considered this baseline as it outperforms the PLSA models. We used the LDA implementation available in the Python Scikit-Learn package, based on the Hoffman et al. (2010) implementation.\nTo assess the performances of the models, as proposed by Hofmann (2001); Blei et al. (2003), we use the perplexity measure defined as:\n$$\mathrm{pp} = \exp\left(-\frac{\sum_{i=1}^{I} \log p(\mathrm{word}_1, \ldots, \mathrm{word}_n, \ldots, \mathrm{word}_{N_i}, \mathrm{doc}_i)}{\sum_{i=1}^{I} N_i}\right) \quad (9)$$\nwhere word1, ..., wordn, ..., wordNi is the sequence of possibly duplicated words composing the i'th document doci, and:\n$$p(\mathrm{word}_1, \ldots, \mathrm{word}_n, \ldots, \mathrm{word}_{N_i}, \mathrm{doc}_i) = \prod_{n=1}^{N_i} \sum_{k=1}^{K} p(\mathrm{word}_n|\mathrm{top}_k)\, p(\mathrm{top}_k|\mathrm{doc}_i)\, p(\mathrm{doc}_i) \quad (10)$$\nThe perplexity is the exponential of the per-word negative log-likelihood of the data under the estimated model; thus, the smaller, the better. This measure is classically used to assess language models' and topic models' performances.\nOur models comprise two hyper-parameters: the embedding dimension D and the number of topics K. 
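The perplexity of Equations 9-10 can be computed directly from the two estimated probability tables. The NumPy sketch below is a minimal version under our assumptions: the function name is ours, and p(doc_i) is taken uniform (1/I) since the excerpt does not specify the document prior.

```python
import numpy as np

def perplexity(docs, p_w_t, p_t_d):
    """Perplexity of Eqs. 9-10. docs[i] lists the word indices of doc i
    (with duplicates); p_w_t is (K, N) = p(word|topic); p_t_d is (I, K)
    = p(topic|doc). p(doc_i) is taken uniform (1/I), our assumption."""
    num_docs = len(docs)
    log_lik, total_words = 0.0, 0
    for i, words in enumerate(docs):
        p_w_d = p_t_d[i] @ p_w_t                   # (N,) topic mixture, Eq. 2
        log_lik += np.log(p_w_d[words]).sum() + np.log(1.0 / num_docs)
        total_words += len(words)
    return np.exp(-log_lik / total_words)
```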
As we optimize our models using stochastic gradient descent, their training involves three parameters: a learning rate set to λ = 0.01, a number of descent steps set to 100, and a batch size set to 64. Our models were implemented in the Tensorflow framework. Neural network parameters were initialized with Xavier initializers, and model optimization is performed with the Adam optimizer2.\n2Our model implementation and benchmarking scripts will be made available upon conference reviews." }, { "heading": "6.2 RESULTS", "text": "We investigate the neural topic models' training performances for varying embedding dimension D and number of topics K. We tested numbers of topics K = 50, 100, 200, 300, and embedding dimensions D = 100, 200, 300.\nTable 2 gives the training perplexity of the models on the KOS, NIPS, and Twenty News Group datasets. These results show that the training perplexity decreases with the number of topics until a point where it stagnates. Also, the training perplexity is higher when the embedding dimension is about 100, while dimensions 200 and 300 exhibit close perplexity values. This trend of perplexity decreasing with increasing embedding dimension and number of topics is expected, as a larger dimension implies a higher neural network learning capacity.\nTable 2 also gives the comparison of training perplexity between D-NTM, C-NTM, and LDA. These results show that the training perplexity is much lower for the neural network based topic models than for LDA. They also show that, in general, D-NTM is more efficient at achieving low perplexity than C-NTM. For an embedding dimension D = 300, the D-NTM model achieves better performances on the KOS and NIPS datasets, while the C-NTM achieves better performances on the Twenty News Group dataset.\nFigure 2 gives sample topics discovered using the continuous neural topic model (C-NTM) over the large-scale datasets: NYtimes, Wikipedia 2012, and Dailymotion. We only considered this model as it scales better than the discrete neural topic model (D-NTM) to large-scale datasets. Discovered topics are displayed in the form of word clouds, where the size of each word is proportional to the probability p(wordn|topk) that the word occurs in the considered topic. This figure shows that the model finds relevant topics. For NYtimes, the discovered topics are about energy plants, medicine, and court law. For Wikipedia, the displayed topics are about books and novels, universities and schools, and new species. For Dailymotion, the discovered topics are about movies, video productions, and the Super Bowl. These qualitative results show that the found topics are consistent and centered around concepts a human being can identify and expect. These examples are just a few sample topics; other, non-displayed topics are about news, sport, music, religion, science, economy, etc." }, { "heading": "7 CONCLUSIONS", "text": "In this paper we presented a novel neural topic model. The proposed model has two variations. The first variation is based on discrete document, word, and topic lookup table embeddings. The second variation exploits a continuous neural auto-encoder embedding to allow scaling to very large corpora. The proposed models were evaluated on six datasets. The conducted evaluations demonstrate that the proposed models outperform LDA with respect to the perplexity metric.\nThe proposed model can be extended in many directions. The continuous document embedding model is based on a simple single-hidden-layer auto-encoder. 
The use of more sophisticated models, such as the variational auto-encoders proposed by Kingma & Welling (2013), could be investigated. In the direction of using more sophisticated neural networks, the proposed models for the probabilities of words given topics and of topics given documents could be represented with deeper neural networks, which are known to have high representational power. This could lead to decisive improvements, especially for large-scale corpora.\nAnother possible direction would be to integrate the proposed neural topic models into models combining probabilistic topic models and neural networks, as done by Dieng et al. (2017), who combined LDA, used to capture global word semantics, with recurrent neural network language models. This would allow designing the model within a single neural network framework. The model would be fully trainable with stochastic gradient descent on sample batches." } ]
2019
DISCOVERING TOPICS WITH NEURAL TOPIC MODELS BUILT FROM PLSA LOSS
SP:a396624adb04f88f4ba9d10a7968be1926b5d226
[ "In this paper the authors propose an end-to-end policy for the placement and partitioning of computational graphs produced \"under the hood\" by platforms like Tensorflow. As the sizes of neural networks increase, using distributed deep learning is becoming more and more necessary. Primitives like the one suggested by the authors are very important in many ways, including improving the ability of the NN to process more data, reducing energy consumption, etc. Compared to prior work, the authors propose a method that can take more than one dataflow graph as input, and learns a policy for graph partitioning/placement of the operations on a set of machines that minimizes the makespan. This problem in principle is NP-hard as it entails both graph partitioning and graph scheduling as its components. The authors propose a heuristic that is composed of two existing methods: graph neural networks are used to produce an embedding of the computation/dataflow graph, followed by a seq-2-seq placement network. The method is able to generalize to unseen instances.", "This work proposes to use a combination of graph neural networks (GNNs) and proximal policy optimization (PPO) to train policies for generalized device placement in dataflow graphs. Essentially, (1) a GNN is used to learn representations of a dataflow graph (in an inductive manner), (2) a transformer is used to output a device placement action for each node in the graph, and (3) the entire system is trained end-to-end via PPO. Extensive experimental results show very impressive results compared to strong baselines." ]
Runtime and scalability of large neural networks can be significantly affected by the placement of operations in their dataflow graphs on suitable devices. With increasingly complex neural network architectures and heterogeneous device characteristics, finding a reasonable placement is extremely challenging even for domain experts. Most existing automated device placement approaches are impractical due to the significant amount of compute required and their inability to generalize to new, previously held-out graphs. To address both limitations, we propose an efficient end-to-end method based on a scalable sequential attention mechanism over a graph neural network that is transferable to new graphs. On a diverse set of representative deep learning models, including Inception-v3, AmoebaNet, Transformer-XL, and WaveNet, our method achieves on average a 16% improvement over human experts and a 9.2% improvement over the prior art, with 15× faster convergence. To further reduce the computation cost, we pre-train the policy network on a set of dataflow graphs and use a superposition network to fine-tune it on each individual graph, achieving state-of-the-art performance on large held-out graphs with over 50k nodes, such as an 8-layer GNMT.
[ { "affiliations": [], "name": "DATAFLOW GRAPHS" } ]
[ { "authors": [ "Ravichandra Addanki", "Shaileshh Bojja Venkatakrishnan", "Shreyan Gupta", "Hongzi Mao", "Mohammad Alizadeh" ], "title": "Placeto: Learning generalizable device placement algorithms for distributed machine learning", "venue": "CoRR, abs/1906.08879,", "year": 2019 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio. Neural machine translation by jointly learning to align", "translate. In ICLR" ], "title": "2015", "venue": "URL https://arxiv.org/abs/ 1409.0473.", "year": 2015 }, { "authors": [ "Brian Cheung", "Alex Terekhov", "Yubei Chen", "Pulkit Agrawal", "Bruno A. Olshausen" ], "title": "Superposition of many models into one. CoRR, abs/1902.05522, 2019", "venue": null, "year": 1902 }, { "authors": [ "Zihang Dai" ], "title": "Improving deep generative modeling with applications", "venue": null, "year": 2019 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime G. Carbonell", "Quoc V. Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": null, "year": 2019 }, { "authors": [ "Yuanxiang Gao", "Li Chen", "Baochun Li" ], "title": "Spotlight: Optimizing device placement for training deep neural networks", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "William L. Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs. NIPS, 2017", "venue": "URL http://arxiv.org/abs/1706.02216", "year": 2017 }, { "authors": [ "Joel Hestness", "Sharan Narang", "Newsha Ardalani", "Gregory Diamos", "Heewoo Jun", "Hassan Kianinejad", "Md. Mostofa Ali Patwary", "Yang Yang", "Yanqi Zhou" ], "title": "Deep learning scaling is predictable, empirically", "venue": "arXiv preprint arXiv:1712.00409,", "year": 2017 }, { "authors": [ "Yanping Huang", "Yonglong Cheng", "Dehao Chen", "HyoukJoong Lee", "Jiquan Ngiam", "Quoc V. Le", "Zhifeng Chen" ], "title": "Gpipe: Efficient training of giant neural networks using pipeline", "venue": "parallelism. CoRR,", "year": 2018 }, { "authors": [ "Zhihao Jia", "Matei Zaharia", "Alex Aiken" ], "title": "Beyond data and model parallelism for deep neural networks. SysML, 2018", "venue": "URL http://arxiv.org/abs/1807.05358", "year": 2018 }, { "authors": [ "Rafal Jozefowicz", "Oriol Vinyals", "Mike Schuster", "Noam Shazeer", "Yonghui Wu" ], "title": "Exploring the limits of language modeling", "venue": "arXiv preprint arXiv:1602.02410,", "year": 2016 }, { "authors": [ "George Karypis", "Vipin Kumar" ], "title": "A fast and high quality multilevel scheme for partitioning irregular graphs", "venue": "SIAM J. Sci. Comput.,", "year": 1998 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Azalia Mirhoseini", "Hieu Pham", "Quoc V. Le", "Benoit Steiner", "Rasmus Larsen", "Yuefeng Zhou", "Naveen Kumar", "Mohammad Norouzi", "Samy Bengio", "Jeff Dean" ], "title": "Device placement optimization with reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Azalia Mirhoseini", "Anna Goldie", "Hieu Pham", "Benoit Steiner", "Quoc V. Le", "Jeff Dean" ], "title": "A hierarchical model for device placement", "venue": null, "year": 2018 }, { "authors": [ "Boris N. 
Oreshkin", "Pau Rodrı́guez López", "Alexandre Lacoste" ], "title": "TADAM: task dependent adaptive metric for improved few-shot learning", "venue": "CoRR, abs/1805.10123,", "year": 2018 }, { "authors": [ "Aditya Paliwal", "Felix Gimeno", "Vinod Nair", "Yujia Li", "Miles Lubin", "Pushmeet Kohli", "Oriol Vinyals" ], "title": "REGAL: transfer learning for fast optimization of computation graphs", "venue": null, "year": 2019 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V. Le" ], "title": "Regularized evolution for image classifier architecture", "venue": "search. CoRR,", "year": 2018 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017", "venue": null, "year": 2017 }, { "authors": [ "Noam Shazeer", "Azalia Mirhoseini", "Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean" ], "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "venue": "arXiv preprint arXiv:1701.06538,", "year": 2017 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V. Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "CoRR, abs/1409.3215,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jonathon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer", "venue": "vision. CoRR,", "year": 2015 }, { "authors": [ "Aäron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew W. Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "CoRR, abs/1609.03499,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks? ICLR, 2019", "venue": "URL http://arxiv.org/abs/1810.00826", "year": 2019 }, { "authors": [ "Jiaxuan You", "Bowen Liu", "Rex Ying", "Vijay S. Pande", "Jure Leskovec" ], "title": "Graph convolutional policy network for goal-directed molecular graph generation", "venue": "CoRR, abs/1806.02473,", "year": 2018 }, { "authors": [ "Wojciech Zaremba", "Ilya Sutskever", "Oriol Vinyals" ], "title": "Recurrent neural network regularization", "venue": "CoRR, abs/1409.2329,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks have demonstrated remarkable scalability: improved performance can usually be achieved by training a larger model on a larger dataset (Hestness et al., 2017; Shazeer et al., 2017; Jozefowicz et al., 2016; Mahajan et al., 2018; Radford et al.). Training such large models efficiently while meeting device constraints, like memory limitations, necessitates partitioning of the underlying dataflow graphs for the models across multiple devices. However, devising a good partitioning and placement of the dataflow graphs requires deep understanding of the model architecture, of the optimizations performed by domain-specific compilers, as well as of the device characteristics, and is therefore extremely hard even for experts.\nML practitioners often rely on their understanding of the model architecture to determine a reasonable partitioning and placement for graphs. However, relying solely on the model architecture while ignoring the effect of the partitioning on subsequent compiler optimizations like op-fusion can lead to sub-optimal placements and consequently under-utilization of available devices. The goal of automated device placement is to find the optimal assignment of operations to devices such that the end-to-end execution time for a single step is minimized and all device constraints like memory limitations are satisfied. Since this objective function is non-differentiable, prior approaches (Mirhoseini et al., 2017; 2018; Gao et al., 2018) have explored solutions based on reinforcement learning (RL). However, these RL policies are usually not transferable and require training a new policy from scratch for each individual graph. This makes such approaches impractical due to the significant amount of compute required for the policy search itself, at times offsetting gains made by the reduced step time.\nIn this paper, we propose an end-to-end deep RL method for device placement where the learned policy is generalizable to new graphs. Specifically, the policy network consists of a graph-embedding network that encodes operation features and dependencies into a trainable graph representation, followed by a scalable sequence-to-sequence placement network based on an improved Transformer (Vaswani et al., 2017; Dai et al., 2019). The placement network transforms the graph representations into a placement decision with soft attention, removing hard constraints such as hierarchical grouping of operations (Mirhoseini et al., 2018) or co-location heuristics (to reduce the placement complexity) (Mirhoseini et al., 2017). Both our graph-embedding network and placement network can be jointly trained in an end-to-end fashion using a supervised reward, without the need to manipulate the loss functions at multiple levels. We empirically show that the network learns flexible placement policies at a per-node granularity and can scale to problems with over 50,000 nodes.\nTo generalize to arbitrary and held-out graphs, our policy is trained jointly over a set of dataflow graphs (instead of one at a time) and then fine-tuned on each graph individually. By transferring the learned graph embeddings and placement policies, we are able to achieve faster convergence and thus use fewer resources to obtain high-quality placements. 
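To illustrate the overall shape of such a policy network, here is a heavily simplified PyTorch sketch: a few rounds of message passing produce node embeddings, and an attention layer emits a per-node device distribution. The class name, layer sizes, number of message-passing rounds, and the use of a plain TransformerEncoder in place of the paper's improved Transformer placement network are all our own simplifications, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PlacementPolicy(nn.Module):
    """Simplified GNN encoder + attention placement head: returns
    log p(a_v | G), one device distribution per node."""
    def __init__(self, feat_dim, hidden, num_devices, msg_rounds=3):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hidden)
        self.msg = nn.Linear(hidden, hidden)
        self.attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4), num_layers=2)
        self.head = nn.Linear(hidden, num_devices)
        self.rounds = msg_rounds

    def forward(self, node_feats, adj):            # node_feats: (V, F), adj: (V, V)
        h = torch.relu(self.embed(node_feats))     # initial node embeddings
        for _ in range(self.rounds):               # message passing over edges
            h = torch.relu(self.msg(adj @ h)) + h  # aggregate neighbors + residual
        h = self.attn(h.unsqueeze(1)).squeeze(1)   # attention over all nodes
        return torch.log_softmax(self.head(h), dim=1)
```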
We also use super-positioning, i.e., a feature conditioning mechanism based on the input graph embeddings, to effectively orchestrate the optimization dynamics of graphs with drastically different sizes in the same batch.\nOur contributions can be summarized as follows:\n1. An end-to-end device placement network that can generalize to arbitrary and held-out graphs. This is enabled by jointly learning a transferable graph neural network along with the placement network.\n2. A scalable placement network with an efficient recurrent attention mechanism, which eliminates the need for an explicit grouping stage before placement. The proposed end-to-end network provides 15× faster convergence as compared to the hierarchical LSTM model used in earlier works (Mirhoseini et al., 2017; 2018).\n3. A new batch pre-training and fine-tuning strategy based on network superposition, which leads to improved transferability, better placements especially for larger graphs, and 10× reduction in policy search time as compared to training individual graphs from scratch.\n4. Superior performance over a wide set of workloads, including InceptionV3 (Szegedy et al., 2015), AmoebaNet (Real et al., 2018), RNNs, GNMT (Wu et al., 2016), Transformer-XL (Dai et al., 2019), WaveNet (van den Oord et al., 2016), and more." }, { "heading": "2 RELATED WORK", "text": "Device Placement Reinforcement learning has been used for device placement of a given dataflow graph (Mirhoseini et al., 2017) and demonstrated run time reduction over human crafted placement and conventional heuristics. For improved scalability, a hierarchical device placement strategy (HDP) (Mirhoseini et al., 2018) has been proposed that clusters operations into groups before placing the operation groups onto devices. Spotlight (Gao et al., 2018) applies proximal policy optimization and cross-entropy minimization to lower training overhead. Both HDP and Spotlight rely on LSTM controllers that are difficult to train and struggle to capture very long-term dependencies over large graphs. In addition, both methods are restricted to process only a single graph at a time, and cannot generalize to arbitrary and held-out graphs. Placeto (Addanki et al., 2019) represents the first attempt to generalize device placement using a graph embedding network. But like HDP, Placeto also relies on hierarchical grouping and only generates placement for one node at each time step. Our approach (GDP) leverages a recurrent attention mechanism and generates the whole graph placement at once. This significantly reduces the training time for the controller. We also demonstrate the generalization ability of GDP over a wider set of important workloads.\nParallelization Strategy Mesh-TensorFlow is a language that provides a general class of distributed tensor computations. While data-parallelism can be viewed as splitting tensors and operations along the “batch” dimension, in Mesh-TensorFlow the user can specify any tensor-dimensions to be split across any dimensions of a multi-dimensional mesh of processors. FlexFlow (Jia et al., 2018) introduces SOAP, a more comprehensive search space of parallelization strategies for DNNs which allows parallelization of a DNN in the Sample, Operator, Attribute, and Parameter dimensions. It uses guided randomized search of the SOAP space to find a parallelization strategy for a specific parallel machine. 
GPipe (Huang et al., 2018) proposed pipeline parallelism by partitioning a model across different accelerators and automatically splitting a mini-batch of training examples into smaller micro-batches. By pipelining the execution across micro-batches, accelerators can operate in parallel. Our GDP focuses on a general deep RL method for automating device placement on arbitrary graphs, and is therefore orthogonal to existing parallelization strategies.

Compiler Optimization REGAL (Paliwal et al., 2019) uses deep RL to optimize the execution cost of computation graphs in a static compiler. The method leverages the policy's ability to transfer to new graphs to improve the quality of the genetic algorithm for the same objective budget. However, REGAL only targets peak memory minimization, while GDP focuses on graph run time and scalability while also meeting the peak memory constraints of the devices. Specifically, we generalize graph partitioning and placement into a single end-to-end problem, with and without simulation, which can handle graphs with over 50,000 nodes." }, { "heading": "3 END-TO-END PLACEMENT POLICY", "text": "Given a dataflow graph G(V,E), where V represents atomic computational operations (ops) and E represents the data dependencies, our goal is to learn a policy π : G → D that assigns a placement D ∈ D for all the ops in the given graph G ∈ G, to maximize the reward rG,D defined based on the run time. D is the set of allocated devices, which can be a mixture of CPUs and GPUs. In this work, we represent the policy πθ as a neural network parameterized by θ.

Unlike prior works that focus on a single graph only, the RL objective in GDP is defined to simultaneously reduce the expected runtime of the placements over a set of N dataflow graphs:

$J(\theta) = \mathbb{E}_{G \sim \mathcal{G},\, D \sim \pi_\theta(G)}[r_{G,D}] \approx \frac{1}{N} \sum_{G} \mathbb{E}_{D \sim \pi_\theta(G)}[r_{G,D}] \quad (1)$

In the following, we refer to the case when N = 1 as individual training and the case when N > 1 as batch training. We optimize the objective above using Proximal Policy Optimization (PPO) (Schulman et al., 2017) for improved sample efficiency.

Figure 1 shows an overview of the proposed end-to-end device placement network. Our proposed policy network πθ consists of a graph embedding network that learns the graphical representation of any dataflow graph, and a placement network that learns a placement strategy over the given graph embeddings. The two components are jointly trained in an end-to-end fashion. The policy p(a|G) is applied to make a set of decisions at each node. These decisions, denoted as a_v for each v ∈ V across all nodes, form one action a = {a_v}_{v∈V}. One decision corresponds to playing one arm of a multi-armed bandit problem, and specifying the entire a corresponds to playing several arms together in a single shot. Note that the architecture is designed to be invariant over the underlying graph topology, enabling us to apply the same learned policy to a wide set of input graphs with different structures." }, { "heading": "3.1 GRAPH EMBEDDING NETWORK", "text": "We leverage graph neural networks (GNNs) (Hamilton et al., 2017; Xu et al., 2019; You et al., 2018) to capture the topological information encoded in the dataflow graph. Most graph embedding frameworks are inherently transductive and can only generate embeddings for a given fixed graph. These transductive methods do not efficiently extrapolate to handle unseen nodes (e.g., in evolving graphs), and cannot learn to generalize to unseen graphs. 
GraphSAGE (Hamilton et al., 2017) is an inductive framework that leverages node attribute information to efficiently generate representations on previously unseen data. While our proposed framework is generic, we adopt the feature aggregation scheme proposed in GraphSAGE to model the dependencies between the operations and build a general, end-to-end device placement method for a wide set of dataflow graphs.

In GDP, nodes and edges in the dataflow graph are represented as the concatenation of their meta features (e.g., operation type, output shape, adjacent node ids) and are further encoded by the graph embedding network into a trainable representation. The graph embedding process consists of multiple iterations, and the computation procedure for the l-th iteration can be outlined as follows:

First, each node v ∈ V aggregates the feature representations of its neighbors, $\{h_u^{(l)},\ \forall u \in \mathcal{N}(v)\}$, into a single vector $h_{\mathcal{N}(v)}^{(l)}$. This aggregation outcome is a function of all previously generated representations, including the initial representations defined based on the input node features. In this work, we use the following aggregation function with max pooling:

$h_{\mathcal{N}(v)}^{(l)} = \max\left(\sigma\!\left(W^{(l)} h_u^{(l)} + b^{(l)}\right),\ \forall u \in \mathcal{N}(v)\right) \quad (2)$

where $(W^{(l)}, b^{(l)})$ define an affine transform and σ stands for the sigmoid activation function. We then concatenate the node's current representation, $h_v^{(l)}$, with the aggregated neighborhood vector, $h_{\mathcal{N}(v)}^{(l)}$, and feed this concatenated vector through a fully connected layer $f^{(l+1)}$:

$h_v^{(l+1)} = f^{(l+1)}\!\left(\mathrm{concat}\!\left(h_v^{(l)}, h_{\mathcal{N}(v)}^{(l)}\right)\right) \quad (3)$

Different from GraphSAGE, parameters in our graph embedding network are trained jointly with a placement network via stochastic gradient descent with PPO, in a supervised fashion, as described in Section 3. That is, we replace the unsupervised loss with our task-specific objective." }, { "heading": "3.2 PLACEMENT NETWORK", "text": "The graph neural network serves as a feature aggregation network that learns a trainable feature representation for the computational graph; we still need a policy network that produces actions on a per-node basis. Given the h_v's, the policy network produces the a_v's through conditionally independent predictions, where the prediction for one node v does not depend on the predictions for other nodes:

$p(a|G) = \prod_v p(a_v|G) = \prod_v p(a_v|f(h_v)) \quad (4)$

The function f can be represented using a multilayer perceptron (MLP) shared across all nodes to predict the placement output distributions. However, MLPs lack a dependency-tracking mechanism across nodes. In practice, the placement of one node can be determined by the placement of another node, for example, when one node consumes a large amount of data produced by the other node. Intuitively, an attention network can learn this dependency and the relative importance of dependencies across an entire graph. Therefore, we decide to use an attention-based placement network to better track inter-node placement-related dependencies.

Designing a scalable placement network that can generalize to graphs with thousands of nodes is challenging, as the conventional GNMT models proposed for language tasks usually target a shorter sequence length. Hierarchical placement (Mirhoseini et al., 2018) has been proposed to address this issue; however, the proposed grouper network comes with limited flexibility and generality. For example, the grouper network leverages an aggregated feature representation by averaging feature vectors for nodes within the same group. 
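Before continuing with the limitations of hierarchical grouping, here is a minimal numpy sketch of the aggregation step in Equations (2)–(3). This is an illustrative reading, not the paper's implementation: the weight shapes, the ReLU nonlinearity assumed for f^(l+1), and the toy graph are all assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aggregate_neighbors(h, neighbor_ids, W, b):
    # Equation (2): elementwise max over sigmoid-transformed neighbor features.
    transformed = sigmoid(h[neighbor_ids] @ W.T + b)  # [|N(v)|, d]
    return transformed.max(axis=0)                    # [d]

def update_node(h_v, h_neigh, W_f, b_f):
    # Equation (3): concatenate self and neighborhood vectors, then a dense
    # layer f(l+1) (ReLU assumed here).
    return np.maximum(0.0, np.concatenate([h_v, h_neigh]) @ W_f.T + b_f)

# Toy usage: 5 nodes with 8-dim features; update node 0 from neighbors {1, 3}.
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))
W, b = rng.normal(size=(8, 8)) * 0.1, np.zeros(8)
W_f, b_f = rng.normal(size=(8, 16)) * 0.1, np.zeros(8)
h_next_0 = update_node(h[0], aggregate_neighbors(h, [1, 3], W, b), W_f, b_f)
print(h_next_0.shape)  # (8,)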
The non-differentiable grouping procedure prevents training the graph-embedding and placement networks end-to-end.

To remove the two-stage hierarchical workflow in HDP for improved scalability, we propose to use a Transformer-based attentive network to generate operation placements in an end-to-end fashion. As the graph embedding already contains spatial (topological) information for each node, we remove the positional embedding in the original transformer to prevent the model from overfitting node identifications. To capture long-term dependencies efficiently among a large set of nodes, we adopt segment-level recurrence introduced in Transformer-XL (Dai et al., 2019; Dai, 2019), where hidden states computed for the previous set of nodes are cached (with gradient flows disabled) and reused as an extended context during the training of the next segment. Besides achieving extra-long context, we empirically find the segment-level recurrent attention much faster than a conventional LSTM-based GNMT model. In our experimental evaluation, we compare both the performance and speed-up of our placement network with that of the LSTM-based hierarchical device placement." }, { "heading": "3.3 BATCH TRAINING WITH PARAMETER SUPERPOSITION", "text": "Since the parameterization for the architecture of the end-to-end policy is designed to be invariant over input graphs with different topologies, the same placement policy can be shared across a wide set of workloads. We therefore propose a batch training strategy, and further enhance the aforementioned architecture to handle such generalization across graphs.

Naïve batch training is challenging in our context as different dataflow graphs contain different numbers of operations connected in different topologies. In addition, unlike previous device placement methods, GDP aims to handle graphs from potentially different application domains (e.g., computer vision, language, and speech), where the number of operations can range from a few thousand to one million. These graphs have drastically different network architectures, in terms of computational operations, data shape, and network topology. As an example, recurrent networks have completely different operation types and connections compared to multi-branch convolutional networks that are widely used in computer vision. It would be highly desirable to train a single shared network that maximizes information sharing across these heterogeneous tasks, without hurting the performance on each of them due to their distinct learning dynamics.

Along a similar direction of multi-task learning and few-shot learning (Oreshkin et al., 2018), we propose a feature conditioning mechanism similar to parameter superposition (Cheung et al., 2019). The idea is to train one shared policy, but condition its parameters based on the input features to mitigate the potentially undesirable interference among different input graphs. Since dense layers (affine transforms followed by nonlinearity) serve as the fundamental building blocks in all of our network components, we introduce an additional conditioning layer to enable superposition in all dense layers of the placement network:

$x^{(l+1)} = g^{(l)}\!\left(c(x^{(0)}) \odot x^{(l)}\right) \quad (5)$

where $g^{(l)}$ stands for a dense layer in our policy network, $c$ stands for the feature conditioning layer, $\odot$ denotes the elementwise product, and $x^{(0)}$ denotes the feature representation of the input graph generated by the graph-embedding network. 
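As a concrete illustration of Equation (5), the following sketch conditions a shared dense layer on the graph embedding via an elementwise product. This is a hedged reading: in GDP the conditioning layer c is implemented as a transformer layer (as noted next), whereas here a single dense layer stands in, and all shapes and the tanh nonlinearity are assumptions.

import numpy as np

def dense(x, W, b, act=np.tanh):
    return act(x @ W.T + b)

def superposed_dense(x_l, x_0, W_g, b_g, W_c, b_c):
    # Equation (5): x(l+1) = g(l)(c(x(0)) ⊙ x(l)); the conditioning features
    # c(x(0)) gate the layer input elementwise before the shared layer g(l).
    gate = dense(x_0, W_c, b_c)
    return dense(gate * x_l, W_g, b_g)

# Toy usage: a 32-dim hidden state conditioned on a 32-dim graph embedding.
rng = np.random.default_rng(1)
x_l, x_0 = rng.normal(size=32), rng.normal(size=32)
W_c, b_c = rng.normal(size=(32, 32)) * 0.1, np.zeros(32)
W_g, b_g = rng.normal(size=(32, 32)) * 0.1, np.zeros(32)
print(superposed_dense(x_l, x_0, W_g, b_g, W_c, b_c).shape)  # (32,)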
The feature conditioning layer is implemented with minimum overhead by adding an additional transformer layer to our placement network." }, { "heading": "4 EXPERIMENT", "text": "" }, { "heading": "4.1 EXPERIMENT SETUP", "text": "In this section, we evaluate our training strategy on widely used machine learning models in computer vision, natural language processing, and speech domains. We compare our approach to human expert placement, TensorFlow METIS placement, and hierarchical device placement (HDP) (Mirhoseini et al., 2018). Our experiments are run on machines with one Intel Broadwell CPU and up to eight Nvidia P100 GPUs. Note that the prior works (Mirhoseini et al., 2017; 2018; Gao et al., 2018) were evaluated on different GPU devices, preventing direct comparison of results. Therefore, we re-evaluate HDP on our own system environment and report those numbers.

The performance of a placement is evaluated by the resulting training step time (run time) of the neural network. We use the negative square root of the normalized run time as the reward, where the run time is normalized with the best run time from a baseline. We use the average reward of all the previous trials as a bias term. The advantage value is computed by subtracting the average reward from the reward. During the search, we apply a large negative reward (-10) for invalid placements (e.g., a violation of a co-location constraint, out of memory, etc.). For operation scheduling, we rely on TensorFlow's default FIFO scheduling." }, { "heading": "4.2 PERFORMANCE ON INDIVIDUAL GRAPHS", "text": "We evaluate GDP by training the model separately on six important graphs, including RNN Language Modeling, GNMT (Sutskever et al., 2014), Transformer-XL, Inception, AmoebaNet, and WaveNet. We name this approach GDP-one. For all the tasks, GDP-one consistently outperforms human expert placement, TensorFlow METIS (Karypis & Kumar, 1998) placement, and HDP. For extremely large graphs, GDP-one is only 6% worse on 8-layer NMT (over 60k nodes), compared to human placement, but is 6.8% better than HDP. Overall, GDP-one achieves on average more than 16% run time reduction across the evaluated 12 graphs, compared to human expert placement. Compared to hierarchical device placement, GDP-one achieves an average 9.2% speed-up, and scales better to large graphs such as 8-layer NMT and 4-layer RNNLM. Importantly, with the efficient end-to-end training and sample-efficient reinforcement learning algorithm, GDP-one has a 15× speed-up in convergence time of the placement network over HDP." }, { "heading": "4.3 GENERALIZATION", "text": "GDP enables the training of multiple heterogeneous graphs in a single batch, sharing parameters in the graph-embedding network and the placement network. We name this training strategy GDP-batch. We empirically show that GDP-batch generates better placements for many workloads such as Transformer-XL (7.6%), WaveNet (15%), and 8-layer GNMT (8%). Table 2 compares the run time of 11 tasks using GDP-batch, with the same end-to-end architecture as described in Section 4.2. GDP-batch yields slightly better run time compared to GDP-one in the majority of the tasks, while being only slightly worse on AmoebaNet. Compared to training graphs separately, GDP-batch reduces network parameters and enables transfer learning among different graphs.

We further evaluate the effect of transfer learning by mixing redundant tasks in a batch. 
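Before presenting those transfer results, here is a minimal sketch of the reward and advantage computation described in Section 4.1 above. The helper names and the toy numbers are ours; only the reward shape (negative square root of the normalized run time, a -10 penalty for invalid placements, and an average-reward baseline) comes from the text.

import numpy as np

def reward(run_time, best_baseline_run_time, valid=True):
    # Negative square root of the run time normalized by the best baseline
    # run time; invalid placements (OOM, co-location violations) get -10.
    if not valid:
        return -10.0
    return -np.sqrt(run_time / best_baseline_run_time)

def advantages(rewards):
    # The average reward over previous trials serves as the bias term;
    # here we use the batch mean as a stand-in for the running average.
    return rewards - rewards.mean()

# Toy usage: three sampled placements against a 1.0s baseline step time.
r = np.array([reward(1.21, 1.0), reward(0.81, 1.0), reward(0.0, 1.0, valid=False)])
print(advantages(r))  # [ 2.9  3.1 -6. ]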
We find that mixing different graphs such as RNNLM and GNMT models with different numbers of layers results in both faster and better learning for RNNLM and GNMT with a large number of layers (8-layer). As a matter of fact, both Placeto (Addanki et al., 2019) and HDP had problems matching human placement performance for 8-layer GNMT or 8-layer RNNLM. With batch training, GDP is the first device placement work to match human expert performance for both 8-layer GNMT and 8-layer RNNLM. We also for the first time show that GDP-batch not only improves the search time (since we do not retrain the policy for every new graph), but can also improve the performance of the found placements. More detailed results are shown in Appendix Table 5.

Generalization to hold-out graphs: Here we show another set of experiments where we treat GDP-batch as a pre-training strategy and remove the target graph from the batch training dataset. We then fine-tune the pre-trained model on the hold-out graphs for fewer than 50 steps, which takes less than one minute. We name this GDP-generalization+finetune. Figure 2 shows that GDP fine-tuning for hold-out graphs outperforms human expert placement and HDP consistently on all six batch training datasets, and performs only slightly worse than GDP-one. 2-layer RNNLM and 2-stack WaveNet almost match the performance of GDP-one. We also run inference (generate placement) directly on the pre-trained model for the target hold-out graphs, and name this GDP-generalization-zeroshot. We find that GDP-generalization-zeroshot only marginally hurts performance as compared to GDP-generalization+finetune, while being slightly better than human placement and HDP. This indicates that both the graph embedding and the learned policies transfer and generalize to the unseen data.

Comparisons with other generalized placement approaches: Placeto (Addanki et al., 2019), to our knowledge, is the only other method besides GDP that shows true (and non-simulated) generalized device placement results. Direct comparison is not possible since Placeto uses a different hardware platform and different input graphs (Inception-V3, NMT, and NASNet). Placeto's search time is on average 2.65× faster than HDP, while GDP is on average 15× faster than HDP on our larger set of graphs. Apart from the search-time speed-up, Placeto on average reduces placed graph run time by 3% (for its different graphs and hardware) while GDP on average reduces placed graph run time by 9.2%, compared to HDP. One advantage of GDP over Placeto is that it does not rely on any initial feasible placement. Providing a reasonable initial placement is often non-trivial for domain experts, especially for larger graphs such as 8-layer GNMT. As such, we are the first to report superhuman results on 8-layer GNMT (with GDP-batch)." }, { "heading": "4.4 ABLATION STUDIES", "text": "Attention and Superposition. We performed an ablation study on the attention and the superposition layer in the Transformer-XL placer network. We find that attention improves placement run time by an average of 18% compared to a placer network with no attention, and superposition improves placement run time by an average of 6.5% where all the graphs are trained in a single batch as described in Section 4.3. Without the superposition network, batch training fails for AmoebaNet and Inception when mixing with larger RNNLM or GNMT models (4-layer).

Pre-training graph embeddings. 
We also evaluate a fine-tuning strategy by pre-training the graph embedding and placement network and fine-tuning the network on the downstream tasks. The difference here compared to Section 4.3 is that we also include the target graphs in the pre-training dataset. When GDP-batch is used as a pre-training strategy, the graph embedding and placement network assimilate meaningful graph representations and placement policies from a wide set of graphs, and thus can be used as a strong baseline network for fine-tuning on downstream tasks. We compare the generated placement run time and the placement search time, normalized to GDP-one. We find that fine-tuning further reduces the placed graph run time by an average of 5% and placement search time by an average of 86%, compared to GDP-one." }, { "heading": "5 CONCLUSION", "text": "In this paper, we present a generalized device placement strategy that uses a graph neural network and super-positioning to generalize to arbitrary and held-out graphs. Through experimental evaluation over a wide set of representative graphs from different domains including computer vision, speech, and NLP, we demonstrated over 15 times faster convergence while achieving 16% and 9.2% reductions in step time over human expert placement and HDP, respectively." }, { "heading": "ACKNOWLEDGMENTS", "text": "TBD" }, { "heading": "6 APPENDIX", "text": "" }, { "heading": "6.1 PROXIMAL POLICY OPTIMIZATION", "text": "In device placement, the objective is to minimize the training step time of a given computational graph or a batch of dataflow graphs for a target system configuration (e.g., an 8-GPU cluster or a TPU pod), by placing operations onto different devices to enable model-level parallelism. This process corresponds to maximizing the expected performance in the MDP. For better sample efficiency, we adopted a Proximal Policy Optimization (PPO) (Schulman et al., 2017) algorithm. The objective is to maximize a surrogate objective:

$L_\pi = \mathbb{E}_{a_{[0:N-1]} \sim \pi}\!\left[\frac{q'(a_n|s_n)}{q(a_n|s_n)} A_\pi(s_n, a_n)\right]$

$L_\pi = \max_{\pi'} \frac{1}{N} \sum_{n=0,\, a_n \sim \pi}^{N-1} \min\!\left(\frac{q'(a_n|s_n)}{q(a_n|s_n)} (R - \bar{R}),\ \mathrm{clip}\!\left(\frac{q'(a_n|s_n)}{q(a_n|s_n)},\, 1-\epsilon,\, 1+\epsilon\right) (R - \bar{R})\right)$

where $\bar{R}$ denotes the average-reward baseline and $\epsilon$ is the PPO clipping threshold. Within a loop, GDP PPO continuously samples placements from the distribution and evaluates their training times on real systems. For a rollout of K, we perform a minibatch of m stochastic gradient ascent steps with respect to the objective of proximal policy optimization, which makes incremental policy improvements. The rollout steps K and minibatch size m are hyperparameters for PPO. We find a set of optimized hyperparameters and keep them fixed for all the experiments presented. As the rewards are generated on-the-fly based on real system measurements, we no longer need to re-evaluate the placement solutions in a separate phase." }, { "heading": "6.2 HYPERPARAMETERS", "text": "In this section, we list all the selected hyperparameters in our experiments in Table 3 and Table 4 for reproducibility." }, { "heading": "6.3 INPUT GRAPHS", "text": "We used a variety of widely used workloads from computer vision, speech, and NLP. In this section, we give a detailed explanation of the selected models and hyperparameters." }, { "heading": "6.3.1 INCEPTION-V3", "text": "Inception-V3 (Szegedy et al., 2015) is a multi-branch convolutional network used for a variety of computer vision tasks, including classification, recognition, and generation. The network consists of blocks made of multiple branches of convolutional and pooling operations. 
Within a block, the branches of ops can be executed in parallel. However, the model is mostly sequential as the outputs of each block are concatenated together to form the input to the next block. We use a batch size of 64. The TensorFlow graph of this model contains 24,713 operations." }, { "heading": "6.3.2 AMOEBANET", "text": "AmoebaNet (Real et al., 2018) is an automatically designed neural network that yields SoTA performance on ImageNet. Similar to Inception-V3, it contains Inception-like blocks called cells, which receive a direct input from the previous cell and a skip input from the cell before it. The network is made of redundant cells stacked together, and is therefore more modular than Inception-V3. We use a batch size of 64. The TensorFlow graph contains 9,430 operations." }, { "heading": "6.3.3 RNNLM", "text": "The Recurrent Neural Network Language Model (Zaremba et al., 2014; Jozefowicz et al., 2016) is made of many LSTM cells organized in a grid structure. The processing of each LSTM cell only depends on the results of 2 other cells (from the previous layer, and from the previous time step), which makes the concurrent execution of many LSTM cells possible given enough hardware resources. We use a batch size of 64 and a hidden size of 2048. The corresponding TensorFlow graph contains 9,021 operations for a 2-layer model. The number of ops grows roughly proportionally with the number of layers." }, { "heading": "6.3.4 GNMT", "text": "Neural Machine Translation with an attention mechanism (Bahdanau et al., 2015; Wu et al., 2016) has an architecture similar to that of RNNLM, but its many hidden states make it far more computationally expensive than RNNLM. To reduce the training time, prior work (Wu et al., 2016) proposes placing each LSTM layer, as well as the attention and the softmax layer, on a separate device. While this strategy demonstrated early success for human placement, we show that GDP can find significantly better placements. We use a batch size of 64. The original 2-layer encoder-decoder consists of 28,044 operations; an extended 4-layer version consists of 46,600 operations, and an even larger 8-layer version consists of 83,712 operations." }, { "heading": "6.3.5 TRANSFORMER-XL", "text": "Transformer-XL (Dai et al., 2019) is a modified version of the Transformer (Vaswani et al., 2017) that supports segment-level recurrence and a novel positional encoding scheme. This innovation enables learning dependencies that are 80% longer than with RNNs, and 450% longer than with vanilla Transformers. We use a Transformer-XL with a batch size of 64, sequence length of 256, segment length of 64, model hidden dimension of 500, feed-forward hidden dimension of 1000, 10 heads, and head dimension of 50. The 2-layer Transformer-XL contains 2,618 operations. The number of ops grows roughly proportionally with the number of layers." }, { "heading": "6.3.6 WAVENET", "text": "WaveNet (van den Oord et al., 2016) is a generative model for speech synthesis. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones. Architecturally, WaveNet uses causal convolutions with dilations to obtain a large receptive field. We use a WaveNet model with a batch size of 64 and a receptive field size of 2048 (9 layers per stack). A 5-stack WaveNet contains 4,374 operations and a 10-stack WaveNet contains 8,516 operations." } ]
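Since Appendix 6.1 closes the description of GDP's training loop, here is a hedged sketch of the clipped PPO surrogate it describes. This is a generic PPO-style estimate, not the paper's code; the clipping threshold of 0.2 and the toy probabilities are assumptions.

import numpy as np

def ppo_clipped_surrogate(logp_new, logp_old, adv, clip_eps=0.2):
    # Mean over the rollout of min(ratio * A, clip(ratio, 1-eps, 1+eps) * A),
    # where ratio = q'(a_n|s_n) / q(a_n|s_n) and A is the advantage (R - R_bar).
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    return np.mean(np.minimum(unclipped, clipped))

# Toy usage: three actions sampled from a rollout.
adv = np.array([0.5, -0.2, 0.1])
logp_old = np.log(np.array([0.30, 0.50, 0.20]))
logp_new = np.log(np.array([0.40, 0.40, 0.20]))
print(ppo_clipped_surrogate(logp_new, logp_old, adv))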
2019
null
SP:caca11294236433df3e4a14e0ae263ef332372c9
[ "The paper modifies existing classifier architectures and training objective, in order to minimize \"conditional entropy bottleneck\" (CEB) objective, in attempts to force the representation to maximize the information bottleneck objective. Consequently, the paper claims that this CEB model improves general test accuracy and robustness against adversarial attacks and common corruptions, compared to the softmax + cross entropy counterpart. This claim is supported by experimental results on CIFAR-10 and ImageNet-C datasets.", "This paper studied the effectiveness of Conditional Entropy Bottleneck (CEB) on improving model robustness. Three tasks are considered to demonstrate its effectiveness; generalization performance over clean test images, adversarially perturbed images, and images corrupted by various synthetic noises. The experiment results demonstrated that CEB improves the model robustness on all considered tasks over the deterministic baseline and adversarially-trained classifiers. " ]
We demonstrate that the Conditional Entropy Bottleneck (CEB) can improve model robustness. CEB is an easy strategy to implement and works in tandem with data augmentation procedures. We report results of a large scale adversarial robustness study on CIFAR-10, as well as the IMAGENET-C Common Corruptions Benchmark, IMAGENET-A, and PGD attacks.
[]
[ { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon", "Kevin Murphy" ], "title": "Deep Variational Information Bottleneck", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Shumeet Baluja", "Ian Fischer" ], "title": "Adversarial transformation networks: Learning to generate adversarial examples", "venue": "arXiv preprint arXiv:1703.09387,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation policies from data", "venue": "arXiv preprint arXiv:1805.09501,", "year": 2018 }, { "authors": [ "Logan Engstrom", "Justin Gilmer", "Gabriel Goh", "Dan Hendrycks", "Andrew Ilyas", "Aleksander Madry", "Reiichiro Nakano", "Preetum Nakkiran", "Shibani Santurkar", "Brandon Tran", "Dimitris Tsipras", "Eric Wallace" ], "title": "A discussion of ’adversarial examples are not bugs, they are features", "venue": "Distill,", "year": 2019 }, { "authors": [ "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Chaowei Xiao", "Atul Prakash", "Tadayoshi Kohno", "Dawn Song" ], "title": "Robust physical-world attacks on deep learning models", "venue": "arXiv preprint arXiv:1707.08945,", "year": 2017 }, { "authors": [ "Ian Fischer" ], "title": "The Conditional Entropy Bottleneck", "venue": "Open Review,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In CoRR,", "year": 2015 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "arXiv preprint arXiv:1903.12261,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Zhao", "Steven Basart", "Jacob Steinhardt", "Dawn Song" ], "title": "Natural adversarial examples", "venue": "arXiv preprint arXiv:1907.07174,", "year": 2019 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": null, "year": 1905 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy 
Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Raphael Gontijo Lopes", "Dong Yin", "Ben Poole", "Justin Gilmer", "Ekin D. Cubuk" ], "title": "Improving robustness without sacrificing accuracy with patch gaussian augmentation", "venue": null, "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do cifar-10 classifiers generalize to cifar-10", "venue": "arXiv preprint arXiv:1806.00451,", "year": 2018 }, { "authors": [ "C. Szegedy", "W. Zaremba", "I. Sutskever", "J. Bruna", "D. Erhan", "I. Goodfellow", "R. Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv: 1312.6199,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Naftali Tishby", "Fernando C Pereira", "William Bialek" ], "title": "The information bottleneck method", "venue": "arXiv preprint physics/0004057,", "year": 2000 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "arXiv preprint arXiv:1805.12152,", "year": 2018 }, { "authors": [ "Tailin Wu", "Ian Fischer", "Isaac Chuang", "Max Tegmark" ], "title": "Learnability for the information bottleneck", "venue": "Uncertainty in AI,", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv: 1708.07747,", "year": 2017 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan L Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Dong Yin", "Raphael Gontijo Lopes", "Jonathon Shlens", "Ekin D. Cubuk", "Justin Gilmer" ], "title": "A fourier perspective on model robustness in computer", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "We aim to make models that make meaningful predictions beyond the data they were trained on. Generally we want our models to be robust. Broadly, robustness is the ability of a model to continue making valid predictions as the distribution the model is tested on moves away from the empirical training set distribution. The most commonly reported robustness metric is simply test set performance, where we verify that our model continues to make valid predictions on what we hope represents valid draws from the exact same data generating procedure.\nAdversarial robustness tests robustness in a worst case setting, where an attacker (Szegedy et al., 2013) makes limited targeted modifications to the input that are as fooling as possible. Many adversarial attacks have been proposed and studied (Szegedy et al., 2013; Carlini & Wagner, 2017b;a; Kurakin et al., 2016a; Madry et al., 2017). Most machine-learned systems are currently believed to be vulnerable to adversarial examples. Many defenses have been proposed, but very few have demonstrated robustness against a powerful, general-purpose adversary (Carlini & Wagner, 2017a; Athalye et al., 2018). While robustness to adversarial attacks continues to attract interest, recent discussions have emphasized the need to consider other forms of robustness as well (Engstrom et al., 2019). The Common Corruptions Benchmark (Hendrycks & Dietterich, 2019) measures image models robustness to more mild but real world sorts of perturbations. Even these modest perturbations can be very fooling for traditional architectures.\nOne of the few general purpose strategies that demonstrably improves model robustness is Data Augmentation (Cubuk et al., 2018; Lopes et al., 2019; Yin et al., 2019). However, it would be nice to identify loss-based solutions that can work in tandem with the data augmentation approaches. Intuitively, by performing modifications of the inputs at training time, the model is prevented from being too sensitive to particular features of the inputs that don’t survive the augmentation procedure.\nAlternatively, we can try to make our models more robust by making them less sensitive to the inputs in the first place. The goal of this work is to experimentally investigate whether, by systematically limiting the complexity of the extracted representation using the Conditional Entropy Bottleneck (CEB), we can make our models more robust in all three of these senses: test set generalization (e.g., classification accuracy on “clean” test inputs), worst-case robustness, and typical-case robustness." }, { "heading": "1.1 CONTRIBUTIONS", "text": "This paper is primarily empirical. We demonstrate:\n• CEB models are easy to implement and train. • CEB models demonstrate improved generalization performance over deterministic base-\nlines on CIFAR-10 and ImageNet. • CEB models show improved robustness to adversarial attacks on CIFAR-10. • CEB models show improved robustness on the IMAGENET-C Common Corruptions\nBenchmark, the IMAGENET-A Benchmark, and targeted PGD attacks.\nAdditionally, we show that adversarially-trained models fail to generalize to attacks they weren’t trained on, by comparing the results on L2 PGD attacks from Madry et al. (2017) to our results on the same baseline architecture. This result underscores the importance of finding ways to make models robust that do not rely on knowing the form of the attack ahead of time." 
}, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 INFORMATION BOTTLENECKS", "text": "The Information Bottleneck (IB) objective (Tishby et al., 2000) aims to learn a stochastic representation Z ∼ p(z|x) that retains as much information about a target variable Y while being as compressed as possible. The objective:1\nmax I(Z;Y )− σ(−ρ)I(Z;X), (1)\nuses a Lagrange multiplier σ(−ρ) to trade off between the relevant information (I(Z;Y )) and complexity of the representation (I(Z;X)). Because Z depends only on X (Z ← X ↔ Y ): Z and Y are conditionally independent given Z:\nI(Z;X,Y ) = I(Z;X) + I(Z;Y |X) = I(Z;Y ) + I(Z;X|Y ). (2)\nThis allows us to write the information bottleneck of Equation (1) in an equivalent form:\nmax I(Z;Y )− e−ρI(Z;X|Y ). (3)\nJust as the original Information Bottleneck objective (Equation (1)) admits a natural variational lower bound (Alemi et al., 2017), so does this form. We can variationally lower bound the mutual information between our representation and the targets with a variational decoder q(y|z):\nI(Z;Y ) = Ep(x,y)p(z|x) [ log\np(y|z) p(y)\n] ≥ H(Y ) + Ep(x,y)p(z|x) [log q(y|z)] . (4)\nWhile we may not know H(Y ) exactly for real world datasets, in the information bottleneck formulation it is a constant outside of our control and so can be dropped in our objective. We can variationally upper bound our residual information:\nI(Z;X|Y ) = Ep(x,y)p(z|x) [ log\np(z|x, y) p(z|y)\n] ≤ Ep(x,y)p(z|x) [ log\np(z|x) q(z|y)\n] , (5)\nwith a variational class conditional marginal q(z|y) that approximates ∫ dx p(z|x)p(x|y). Putting both bounds together gives us the Conditional Entropy Bottleneck objective (Fischer, 2018):\nmin p(z|x)\nEp(x,y)p(z|x) [ log q(y|z)− e−ρ log p(z|x)\nq(z|y)\n] (6)\nCompare this with the Variational Information Bottleneck (VIB) objective (Alemi et al., 2017):\nmin p(z|x)\nEp(x,y)p(z|x) [ log q(y|z)− σ(−ρ) log p(z|x)\nq(z)\n] . (7)\nThe difference between CEB and VIB is the presence of a class conditional versus unconditional variational marginal. As can be seen in Equation (5): using an unconditional marginal provides a looser variational upper bound on I(Z;X|Y ). CEB (Equation (6)) can be thought of as a tighter variational approximation than VIB (Equation (7)) to Equation (3). Since Equation (3) is equivalent to the IB objective (Equation (1)), CEB can be thought of as a tighter variational approximation to the IB objective than VIB.\n1 The IB objective is ordinarily written with a Lagrange multiplier β ≡ σ(−ρ) with a natural range from 0 to 1. Here we use the sigmoid function: σ(−ρ) ≡ 1\n1+eρ to reparameterize in terms of a control parameter ρ on\nthe whole real line. As ρ→∞ the bottleneck turns off." }, { "heading": "2.2 IMPLEMENTING A CEB MODEL", "text": "In practice, turning an existing classifier architecture into a CEB model is very simple. For the stochastic representation p(z|x) we simply use the original architecture, replacing the final softmax layer with a dense layer with d outputs. These outputs are then used to specify the means of a d-dimensional Gaussian distribution with unit diagonal covariance. That is, to form the stochastic representation, independent standard normal noise is simply added to the output of the network (z = x + ). For every input, this stochastic encoder will generate a random d-dimensional output vector. For the variational classifier q(y|z) any classifier network can be used, including just a linear softmax classifier as done in these experiments. 
For the variational conditional marginal q(z|y) it helps to use the same distribution as output by the classifier. For the simple unit-variance Gaussian encoding we used in these experiments, this requires learning just d parameters per class. For ease of implementation, this can be represented as a single dense linear layer mapping from a one-hot representation of the labels to the d-dimensional output, interpreted as the mean of the corresponding class marginal.

In this setup the CEB loss takes a particularly simple form:

$\mathbb{E}\left[-\left(w_y \cdot (f(x) + \epsilon) - \log \sum_{y'} e^{w_{y'} \cdot (f(x) + \epsilon)}\right) + \frac{e^{-\rho}}{2} \left(f(x) - \mu_y\right) \cdot \left(f(x) - \mu_y + 2\epsilon\right)\right]. \quad (8)$

Here the first term is the usual softmax classifier loss, but acting on our stochastic representation z = f(x) + ε, which is simply the output of our encoder network f(x) with additive Gaussian noise. The $w_y$ is the y-th row of weights in the final linear layer outputting the logits. The $\mu_y$ are the learned class-conditional means for our marginal. The ε are standard normal draws from an isotropic unit-variance Gaussian with the same dimension as our encoding f(x). The second term in the loss is a stochastic sampling of the KL divergence between our encoder likelihood and the class-conditional marginal likelihood. ρ controls the strength of the bottleneck and can vary on the whole real line. As ρ → ∞ the bottleneck is turned off. In practice we find that ρ values near but above 0 tend to work best for modest-size models, with the tendency for the best ρ to approach 0 as the model capacity increases. Notice that in expectation the second term in the loss is $\frac{1}{2}\|f(x) - \mu_y\|^2$, which encourages the learned means $\mu_y$ to converge to the average of the representations of each element in the class. During testing we use the mean encodings and remove the stochasticity.

In its simplest form, training a classifier with CEB amounts to injecting Gaussian random noise in the penultimate layer and learning estimates of the class-averaged output of that layer with the stochastic regularization shown. In Appendix B we show simple modifications to the TPU-compatible ResNet implementation available on GitHub from the Google TensorFlow Team that produce the same core ResNet-50 models we use for our ImageNet experiments." }, { "heading": "2.3 ADVERSARIAL ATTACKS AND DEFENSES", "text": "Attacks. The first adversarial attacks were proposed in Szegedy et al. (2013); Goodfellow et al. (2015). Since those seminal works, an enormous variety of attacks has been proposed (Kurakin et al. (2016a;b); Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017b); Madry et al. (2017); Eykholt et al. (2017); Baluja & Fischer (2017), etc.). In this work, we will primarily consider the Projected Gradient Descent (PGD) attack (Madry et al., 2017), which is a multi-step variant of the early Fast Gradient Method (Goodfellow et al., 2015). The attack can be viewed as having four parameters: p, the norm of the attack (typically 2 or ∞); ε, the radius of the p-norm ball within which the attack is permitted to make changes to an input; n, the number of gradient steps the adversary is permitted to take; and $\epsilon_i$, the per-step limit to modifications of the current input. In this work, we consider L2 and L∞ attacks of varying ε and n, and with $\epsilon_i = \frac{4\epsilon}{3n}$.

Defenses. A common defense for adversarial examples is adversarial training. Adversarial training was originally proposed in Szegedy et al. (2013), but was not practical until the Fast Gradient Method was introduced. 
It has been studied in detail, with varied techniques (Kurakin et al., 2016b; Madry et al., 2017; Ilyas et al., 2019; Xie et al., 2019). Adversarial training can clearly be viewed as a form of data augmentation (Tsipras et al., 2018), where instead of using some fixed set of functions to modify the training examples, we use the model itself in combination with one or more adversarial attacks to modify the training examples. As the model changes, the distribution of modifications changes as well. However, unlike with non-adversarial data augmentation techniques, such as AUTOAUG, adversarial training techniques considered in the literature so far cause substantial reductions in accuracy on clean test sets. For example, the CIFAR-10 model described in Madry et al. (2017) gets 95.5% accuracy when trained normally, but only 87.3% when trained on L∞ adversarial examples. More recently, Xie et al. (2019) adversarially train ImageNet models with impressive robustness to targeted PGD L∞ attacks, but at only 62.32% accuracy on the non-adversarial test set, compared to 78.81% accuracy for the same model trained only on clean images." }, { "heading": "2.4 COMMON CORRUPTIONS", "text": "The Common Corruptions Benchmark (Hendrycks & Dietterich, 2019) offers a real-world test of model robustness in light of common image processing pipeline corruptions. Figure 4 shows examples of the 15 corruptions present in the benchmark. IMAGENET-C is a modified test set of ImageNet images with the 15 corruptions applied at five different strengths. Within each corruption type we evaluated the average error at each of the five levels ($E_c = \frac{1}{5}\sum_{s=1}^{5} E_{cs}$). To summarize the performance across all corruptions we report not only the average corruption error across all 15 tasks ($\mathrm{avg} = \frac{1}{15}\sum_c E_c$), but also the commonly reported Mean Corruption Error (mCE), which reweights the errors on each task according to the performance of a baseline ALEXNET model (Hendrycks & Dietterich, 2019):

$\mathrm{mCE} = \frac{1}{15} \sum_c \frac{\sum_{s=1}^{5} E_{cs}}{\sum_{s=1}^{5} E^{\mathrm{ALEXNET}}_{cs}}. \quad (9)$

There are slightly different pipelines that have been used in the literature for the IMAGENET-C task (Lopes et al., 2019). In this work we used the ALEXNET normalization numbers and data formulation as in Yin et al. (2019)." }, { "heading": "2.5 NATURAL ADVERSARIAL EXAMPLES", "text": "The IMAGENET-A Benchmark (Hendrycks et al., 2019) is a dataset of 7,500 naturally-occurring "adversarial" examples across 200 ImageNet classes. The images exploit commonly-occurring weaknesses in ImageNet models, such as relying on textures often seen with certain class labels." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 FASHION-MNIST EXPERIMENTS", "text": "As a warm-up, we consider Wide ResNet (Zagoruyko & Komodakis, 2016) models trained on Fashion MNIST (Xiao et al., 2017), and evaluated on targeted PGD L2 and L∞ attacks. All of the attacks are targeting the trouser class of Fashion MNIST, as that is the most distinctive class. Targeting a less distinctive class, such as one of the shirt classes, would confuse the difficulty of classifying the different shirts and the robustness of the model to adversaries. To measure robustness to the targeted attacks, we count the number of predictions that changed from a correct prediction on the clean image to an incorrect prediction of the target class on the adversarial image, and divide by the original number of correct predictions. 
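Before turning to the Fashion-MNIST results, here is a sketch of the targeted PGD attack parameterized as in Section 2.3 (norm p = 2, radius ε, n steps, per-step limit ε_i). The surrogate gradient function, the pixel clipping range, and the step-size convention are assumptions; a real attack would differentiate the target-class loss of the model under attack.

import numpy as np

def targeted_pgd_l2(x, grad_fn, eps, n):
    # n descent steps of size eps_i on the target-class loss, each followed
    # by projection back onto the L2 ball of radius eps around the clean x.
    eps_i = 4.0 * eps / (3.0 * n)
    x_adv = x.copy()
    for _ in range(n):
        g = grad_fn(x_adv)
        x_adv = x_adv - eps_i * g / (np.linalg.norm(g) + 1e-12)
        delta = x_adv - x
        norm = np.linalg.norm(delta)
        if norm > eps:
            delta *= eps / norm
        x_adv = np.clip(x + delta, 0.0, 1.0)  # keep pixels in a valid range
    return x_adv

# Toy usage: a quadratic surrogate loss pulling the input toward a target t.
t = np.full(4, 0.8)
grad_fn = lambda x: 2.0 * (x - t)
print(targeted_pgd_l2(np.zeros(4), grad_fn, eps=0.5, n=20))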
Results are shown in Figure 1.

In this experiment we wanted to compare the performance of VIB, CEB, and a deterministic baseline. In Figure 1 (left) we see that both VIB and CEB have improved accuracy over the deterministic baseline. In order to compare the relative complexity of the learned representations for the two models, in the second panel we show the maximum lower bound seen during training on the rate: $\mathbb{E}\left[\log \frac{p(z|x)}{\frac{1}{K}\sum_{k}^{K} p(z|x_k)}\right] \le I(Z;X)$, using the encoder's minibatch marginal for both VIB and CEB.2

2 This lower bound on I(X;Z) is the "InfoNCE with a tractable encoder" bound from Poole et al. (2019).

The two sets of models show nearly the same rate lower bound at each value of ρ.

The right two panels of Figure 1 show robustness to the targeted PGD L2 and L∞ attacks. Here CEB outperforms VIB. We also see for both models that as ρ decreases, the robustness to both attacks increases. In line with the proposed Minimum Necessary Information criterion from Fischer (2018), at ρ = 0 we end up with CEB models that have hit exactly 2.3 nats for the rate lower bound, have maintained high accuracy, and have strong robustness to both attacks. Moving to ρ = −1 gives only a small improvement to robustness, at the cost of a large decrease in accuracy." }, { "heading": "3.2 CIFAR-10 EXPERIMENTS", "text": "28×10 Wide ResNet Experiments. We trained a set of 25 28×10 Wide ResNet (WRN) CEB models on CIFAR-10 at ρ ∈ [−1, −0.75, ..., 5], as well as a deterministic baseline. They trained for 1500 epochs, lowering the learning rate by a factor of 0.3 after 500, 1000, and 1250 epochs. This long training regime was due to our use of the original AUTOAUG policies, which require longer training. The only additional modification we made to the basic 28×10 WRN architecture was the removal of all Batch Normalization (Ioffe & Szegedy, 2015) layers. Every small CIFAR-10 model we have trained with Batch Normalization enabled has had substantially worse robustness to L∞ PGD adversaries, even though typically the accuracy is much higher. For example, 28×10 WRN CEB models rarely exceeded 10% adversarial accuracy. However, it was always still the case that lower values of ρ gave higher robustness. As a baseline comparison, a deterministic 28×10 WRN with BatchNorm, trained with AUTOAUG, reaches 97.3% accuracy on clean images, but 0% accuracy on L∞ PGD attacks at ε = 8 and n = 20. Interestingly, that model was noticeably more robust to L2 PGD attacks than the deterministic baseline without BatchNorm, getting 73% accuracy compared to 66%. However, it was still much weaker than the CEB models, which get over 80% accuracy on the same attack (Figure 2). Additional training details are in Appendix A.1.

Figure 2 demonstrates the adversarial robustness of CEB models to both targeted L2 and L∞ attacks. The CEB models show a marked improvement in robustness to L2 attacks compared to an adversarially-trained baseline from Madry et al. (2017) (denoted Madry). Figure 3 shows the robustness of five of those models to PGD attacks as ε is varied. We selected the four CEB models to represent the most robust models across most of the range of ρ we trained. Note that of the 25 CEB models we trained, only the models with ρ ≥ 1 successfully trained. The remainder collapsed to chance performance. This is something we observe on all datasets when training models whose capacity is too low. Only by increasing model capacity does it become possible to train at low ρ. Note that this result is predicted by the theory of the onset of learning in the Information Bottleneck and its relationship to model capacity from Wu et al. (2019).

62×7 Wide ResNet Experiments. In order to explore the effect of model size on training, and to train at lower ρ, we trained the largest Wide ResNet we could fit on a single GPU with a batch size of 250. This was a 62×7 model similar to the ones above, including the use of AUTOAUG, but we additionally enabled BatchNorm. We were able to train at ρ = 0 with this larger model, which reached 97.51% accuracy. This result is better than the 28×10 Wide ResNet from AUTOAUG by 0.19 percentage points, although it is still worse than the Shake-Drop model from that paper. We additionally tested the model on the new CIFAR-10.1 test set (Recht et al., 2018), getting accuracy of 93.6%. This is a gap of 3.9 percentage points, which is better than all of the results reported in that paper, and substantially better than the Wide ResNet results (but still inferior to the Shake-Drop AUTOAUG results). The same model at ρ = 5 reached 97.05% accuracy on the normal test set and 91.9% on the CIFAR-10.1 test set, showing that increased ρ gave substantially worse generalization.

To test robustness of these models, we swept ε for both PGD attacks, which we show in Figure 3. The main result is that the 62×7 CEB0 (ρ = 0) model not only has substantially higher accuracy than baseline Wide ResNets trained with AUTOAUG, but also beats the adversarially-trained model on both the L2 and the L∞ attacks at almost all values of ε. We also show that this model is even more robust to two transfer attacks, where we used the 62×7 CEB5 (ρ = 5) model and the adversarially-trained model to generate PGD attacks, and then tested them on the CEB0 model. This result helps to counter possible claims that these models are doing "gradient masking" (the more compelling evidence against gradient masking is that the robustness of the model is strongly correlated with the hyperparameter ρ, whose only effect is to constrain the amount of information the model captures).

We additionally tested both models on the CIFAR-10 Common Corruptions test sets. At the time of training, we were unaware that AUTOAUG's default policies for CIFAR-10 contain brightness and contrast augmentations that amount to training on those two corruptions from Common Corruptions (as mentioned in Yin et al. (2019)), so our results are not appropriate for direct comparison with other results in the literature. However, they still allow us to compare the effect of bottlenecking the information between these two large models. The ρ = 5 model reached an mCE of 61.2.3 The ρ = 0 model reached an mCE of 52.0, which is a dramatic relative improvement.

3 The mCE is computed relative to a baseline model. We use the baseline model from Yin et al. (2019)." }, { "heading": "3.3 IMAGENET EXPERIMENTS", "text": "To demonstrate CEB's ability to improve robustness to real-world data shifts, we trained six different types of networks on ImageNet at 224×224 resolution with two different sizes of RESNET, RESNET-50 and RESNET-152, and then tested them on IMAGENET-C, IMAGENET-A, and targeted PGD attacks. As a simple baseline we trained RESNET-50 networks with no data augmentation. We then trained the same networks but as CEB networks at ten different values of ρ = (1, 2, . . . , 10). AUTOAUG (Cubuk et al., 2018) has previously been demonstrated to improve robustness markedly on IMAGENET-C, so next we trained our baseline RESNET-50 model with AUTOAUG. We similarly trained these AUTOAUG models as CEB models with ten different values of ρ. IMAGENET-C numbers are also sensitive to the model capacity. 
To assess whether CEB can benefit larger models, we repeated the experiments with a modified RESNET-50 network where every layer was made twice as wide. Finally, we repeated the above six model types with RESNET-152 baselines and CEB models without AUTOAUG, with AUTOAUG, and with AUTOAUG and twice as wide. All other hyperparameters (learning rate schedule, L2 weight decay scale, etc.) remained the same across all models. In total we trained 66 ImageNet models – 6 deterministic baselines varying augmentation, width, and depth, and 60 CEB models additionally varying ρ. The results for the RESNET-50 models are summarized in Figure 4 and Table 1. For RESNET-152, see Figure 5 and Table 2.

The CEB models highlighted in Figures 4 and 5 and Tables 1 and 2 were selected by cross-validation. These were values of ρ that gave the best clean test set accuracy. Despite being selected for classical generalization, these models also demonstrate a high degree of robustness on both average- and worst-case perturbations. In the case that more than one model gets the same test set accuracy, we choose the model with the lower ρ, since we know that lower ρ correlates with higher robustness. The only model where we had to make this decision was for RESNET-152 with AUTOAUG, where five models all were within 0.1% of each other, so we chose the ρ = 3 model, rather than ρ ∈ {5...8}.

IMAGENET-C and IMAGENET-A. Both data augmentation and increasing model capacity have positive effects on robustness to both IMAGENET-C and IMAGENET-A, but for all three classes of models CEB gives substantial additional improvements.

Targeted PGD Attacks. We tested on the random-target version of the PGD L2 and L∞ attacks (Kurakin et al., 2016a), both at ε = 16, n = 20, and ε_i = 2, which is still considered to be a strong attack (Xie et al., 2019). Similar to our results on CIFAR-10, model capacity makes a substantial difference for whitebox adversarial attacks. In particular, none of the RESNET-50 models perform well, getting less than 1% top-1 accuracy. However, the RESNET-152 CEB models show a dramatic improvement over the deterministic baseline models, with top-1 accuracy increasing from
We have shown that CEB models at a range of ρ essentially dominate an adversarially-trained baseline model, even on the attack the adversarial model was trained on, and have incidentally shown that the adversarially-trained model generalizes less well to at least one other attack than a deterministic baseline. Finally, we have shown that on ImageNet, CEB provides substantial gains over deterministic baselines in both validation accuracy and robustness to Common Corruptions, Natural Adversarial Examples, and targeted Projected Gradient Descent attacks. We hope these empirical demonstrations inspire further theoretical and practical study of the use of bottlenecking techniques to encourage improvements to both classical generalization and robustness." }, { "heading": "A EXPERIMENT DETAILS", "text": "Here we present additional technical details for the CIFAR-10 and ImageNet experiments.
A.1 CIFAR-10 EXPERIMENT DETAILS
We trained all of the models using Adam (Kingma & Ba, 2015) at a base learning rate of 10−3. We lowered the learning rate three times, by a factor of 0.3 each time. The only additional trick needed to train the CIFAR-10 models was to start with ρ = 100, anneal down to ρ = 10 over 2 epochs, and then anneal to the target ρ over one epoch once training exceeded a threshold of 20%. This jump-start method is inspired by experiments on VIB in Wu et al. (2019). It makes it much easier to train models at low ρ, and appears not to negatively impact final performance.
For the 62×7 models, we used the data augmentation policies for CIFAR-10 found by AUTOAUG and trained the models for 800 epochs, lowering the learning rate by a factor of 10 at 400 and 600 epochs.
A.2 IMAGENET EXPERIMENT DETAILS
We follow the learning rate schedule for the RESNET-50 from Cubuk et al. (2018), which has a top learning rate of 1.6, trains for 270 epochs, and drops the learning rate by a factor of 10 at 90, 180, and 240 epochs. The only difference for all of our models is that we train at a batch size of 8192 rather than 4096. Similar to the CIFAR-10 models, in order to ensure that the ImageNet models train at low ρ, we employ a simple jump-start: we start at ρ = 100 and anneal down to the target ρ over 12,000 steps. The first learning rate drop occurs a bit after 14,000 steps. Also similar to the CIFAR 28×10 WRN experiments, none of the models we trained at ρ = 0 succeeded, indicating that RESNET-50 and WRN 50×2 both have insufficient capacity to fully learn ImageNet. We were able to train RESNET-152 at ρ = 0, but only by disabling L2 weight decay and using a slightly lower learning rate. Since that involved additional hyperparameter tuning, we do not report those results here, beyond noting that it is possible, and that those models reached top-1 accuracy around 72%." }, { "heading": "B CEB EXAMPLE CODE", "text": "Here we give the core changes needed to make ResNet CEB models, based on the TPU-compatible ResNet implementation from the Google TensorFlow Team.

# In model.py:
def resnet_v1_generator(block_fn, layers, num_classes, ...):

  def model(inputs, is_training):
    # Build the ResNet model as normal up to the following lines:
    inputs = tf.reshape(
        inputs, [-1, 2048 if block_fn is bottleneck_block else 512])
    # Now, instead of the final dense layer, just return inputs,
    # which for ResNet50 models is a [batch_size, 2048] tensor.
    return inputs

Listing 1: Modifications to the model.py function.

# In resnet_main.py add the following imports and functions:
import numpy as np
import tensorflow_probability as tfp
tfd = tfp.distributions

def ezx_dist(x):
  """Builds the encoder distribution, e(z|x)."""
  dist = tfd.MultivariateNormalDiag(loc=x)
  return dist

def bzy_dist(y, num_classes=1000, z_dims=2048):
  """Builds the backwards distribution, b(z|y)."""
  y_onehot = tf.one_hot(y, num_classes)
  mus = tf.layers.dense(y_onehot, z_dims, activation=None)
  dist = tfd.MultivariateNormalDiag(loc=mus)
  return dist

def cyz_dist(z, num_classes=1000):
  """Builds the classifier distribution, c(y|z)."""
  # For the classifier, we are using exactly the same dense layer
  # initialization as was used for the final layer that we removed
  # from model.py.
  logits = tf.layers.dense(
      z, num_classes, activation=None,
      kernel_initializer=tf.random_normal_initializer(stddev=.01))
  return tfd.Categorical(logits=logits)

def lerp(global_step, start_step, end_step, start_val, end_val):
  """Utility function to linearly interpolate two values."""
  interp = (tf.cast(global_step - start_step, tf.float32)
            / tf.cast(end_step - start_step, tf.float32))
  interp = tf.maximum(0.0, tf.minimum(1.0, interp))
  return start_val * (1.0 - interp) + end_val * interp

Listing 2: Modifications to the head of resnet_main.py.

# Still in resnet_main.py, modify resnet_model_fn as follows:
def resnet_model_fn(features, labels, mode, params):
  # Nothing changes until after the definition of build_network:
  def build_network():
    # Elided, unchanged implementation of build_network.
    ...

  if params['precision'] == 'bfloat16':
    # build_network now returns the pre-logits, so we'll change
    # the variable name from logits to net.
    with tf.contrib.tpu.bfloat16_scope():
      net = build_network()
    net = tf.cast(net, tf.float32)
  elif params['precision'] == 'float32':
    net = build_network()

  # Get the encoder, e(z|x):
  with tf.variable_scope('ezx', reuse=tf.AUTO_REUSE):
    ezx = ezx_dist(net)
  # Get the backwards encoder, b(z|y):
  with tf.variable_scope('bzy', reuse=tf.AUTO_REUSE):
    bzy = bzy_dist(labels)

  # Only sample z during training. Otherwise, just pass through
  # the mean value of the encoder.
  if mode == tf.estimator.ModeKeys.TRAIN:
    z = ezx.sample()
  else:
    z = ezx.mean()

  # Get the classifier, c(y|z). num_classes defaults to 1000,
  # matching ImageNet.
  with tf.variable_scope('cyz', reuse=tf.AUTO_REUSE):
    cyz = cyz_dist(z)

  # cyz.logits is the same as what the unmodified ResNet model
  # would return.
  logits = cyz.logits

  # Compute the individual conditional entropies:
  hzx = -ezx.log_prob(z)       # H(Z|X)
  hzy = -bzy.log_prob(z)       # H(Z|Y) (upper bound)
  hyz = -cyz.log_prob(labels)  # H(Y|Z) (upper bound)

  # I(X;Z|Y) = -H(Z|X) + H(Z|Y)
  #          >= -hzx + hzy =: Rex, the residual information.
  rex = -hzx + hzy

  rho = 3.0  # You should make this a hyperparameter.
  rho_to_gamma = lambda rho: 1.0 / np.exp(rho)
  gamma = tf.cast(rho_to_gamma(rho), tf.float32)

  # Get the global step now, so that we can adjust rho dynamically.
  global_step = tf.train.get_global_step()

  anneal_rho = 12000  # You should make this a hyperparameter.
  if anneal_rho > 0:
    # Anneal rho from 100 down to the target rho
    # over the first anneal_rho steps.
    gamma = lerp(global_step, 0, anneal_rho,
                 rho_to_gamma(100.0), gamma)

  # Replace all the softmax cross-entropy loss computation with
  # the following line:
  loss = tf.reduce_mean(gamma * rex + hyz)
  # The rest of resnet_model_fn can remain unchanged.

Listing 3: Modifications to resnet_model_fn in resnet_main.py." } ]
2019
CEB IMPROVES MODEL ROBUSTNESS
SP:50073cbe6ab4b44b3c68f141542c1e81df0c5f61
[ "This paper addresses the problem of representation learning for temporal graphs. That is, graphs where the topology can evolve over time. The contribution is a temporal graph attention (TGAT) layer aims to exploit learned temporal dynamics of graph evolution in tasks such as node classification and link prediction. This TGAT layer can work in an inductive manner unlike much prior work which is restricted to the transduction setting. Specifically, a temporal-kernel is introduced to generate time-related features, and incorporated into the self-attention mechanism. The results on some standard and new graph-structured benchmarks show improved performance vs a variety of baselines in both transduction and inductive settings. ", "This paper proposed the temporal graph attention layer which aggregates in-hop features with self-attention and incorporates temporal information with Fourier based relative positional encoding. This idea is novel in GCN field. Experimental results demonstrate that the TGAT which adds temporal encoding outperforms the other methods. Overall this paper addressed its core ideas clearly and made proper experiments and analysis to demonstrate the superiority against existing counterparts." ]
Inductive representation learning on temporal graphs is an important step toward scalable machine learning on real-world dynamic networks. The evolving nature of temporal graphs requires handling new nodes as well as capturing temporal patterns. The node embeddings, which are now functions of time, should represent both the static node features and the evolving topological structures. Moreover, node and topological features can be temporal as well, whose patterns the node embeddings should also capture. We propose the temporal graph attention (TGAT) layer to efficiently aggregate temporal-topological neighborhood features as well as to learn the time-feature interactions. For TGAT, we use the self-attention mechanism as the building block and develop a novel functional time encoding technique based on the classical Bochner’s theorem from harmonic analysis. By stacking TGAT layers, the network recognizes the node embeddings as functions of time and is able to inductively infer embeddings for both new and observed nodes as the graph evolves. The proposed approach handles both the node classification and link prediction tasks, and can be naturally extended to include temporal edge features. We evaluate our method with transductive and inductive tasks under temporal settings on two benchmark datasets and one industrial dataset. Our TGAT model compares favorably to state-of-the-art baselines as well as previous temporal graph embedding approaches.
[ { "affiliations": [], "name": "Da Xu" }, { "affiliations": [], "name": "Chuanwei Ruan" }, { "affiliations": [], "name": "Evren Korpeoglu" }, { "affiliations": [], "name": "Kannan Achan" } ]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Matthias Fey", "Jan E. Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Palash Goyal", "Nitin Kamra", "Xinran He", "Yan Liu" ], "title": "Dyngem: Deep embedding method for dynamic graphs", "venue": "arXiv preprint arXiv:1805.11273,", "year": 2018 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "William L Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Representation learning on graphs: Methods and applications", "venue": "arXiv preprint arXiv:1709.05584,", "year": 2017 }, { "authors": [ "Mikael Henaff", "Joan Bruna", "Yann LeCun" ], "title": "Deep convolutional networks on graph-structured data", "venue": "arXiv preprint arXiv:1506.05163,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Variational graph auto-encoders", "venue": "arXiv preprint arXiv:1611.07308,", "year": 2016 }, { "authors": [ "Taisong Li", "Jiawei Zhang", "S Yu Philip", "Yan Zhang", "Yonghong Yan" ], "title": "Deep dynamic network embedding for link prediction", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Lynn H Loomis" ], "title": "Introduction to abstract harmonic analysis", "venue": "Courier Corporation,", "year": 2013 }, { "authors": [ "Yao Ma", "Ziyi Guo", "Eric Zhao Zhaochun Ren", "Dawei Yin Jiliang Tang" ], "title": "Streaming graph neural networks", "venue": "arXiv preprint arXiv:1810.10627,", "year": 2018 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodola", "Jan Svoboda", "Michael M Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "In Proceedings of the IEEE Conference on Computer 
Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Giang Hoang Nguyen", "John Boaz Lee", "Ryan A Rossi", "Nesreen K Ahmed", "Eunyee Koh", "Sungchul Kim" ], "title": "Continuous-time dynamic network embeddings", "venue": "In Companion Proceedings of the The Web Conference", "year": 2018 }, { "authors": [ "James W Pennebaker", "Martha E Francis", "Roger J Booth" ], "title": "Linguistic inquiry and word count", "venue": "Liwc", "year": 2001 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2014 }, { "authors": [ "Ali Rahimi", "Benjamin Recht" ], "title": "Random features for large-scale kernel machines", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Mahmudur Rahman", "Tanay Kumar Saha", "Mohammad Al Hasan", "Kevin S Xu", "Chandan K Reddy" ], "title": "Dylink2vec: Effective feature representation for link prediction in dynamic networks", "venue": "arXiv preprint arXiv:1804.05755,", "year": 2018 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "arXiv preprint arXiv:1505.05770,", "year": 2015 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "Dynamic edge-conditioned filters in convolutional neural networks on graphs", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Sainbayar Sukhbaatar", "Jason Weston", "Rob Fergus" ], "title": "End-to-end memory networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Jian Tang", "Meng Qu", "Mingzhe Wang", "Ming Zhang", "Jun Yan", "Qiaozhu Mei" ], "title": "Line: Large-scale information network embedding", "venue": "In Proceedings of the 24th international conference on world wide web,", "year": 2015 }, { "authors": [ "Rakshit Trivedi", "Mehrdad Farajtabar", "Prasenjeet Biswal", "Hongyuan Zha" ], "title": "Representation learning over dynamic graphs", "venue": "arXiv preprint arXiv:1803.04051,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Daixin Wang", "Peng Cui", "Wenwu Zhu" ], "title": "Structural deep network embedding", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Jizhe Wang", "Pipei Huang", "Huan Zhao", "Zhibo Zhang", "Binqiang Zhao", "Dik Lun Lee" ], "title": "Billion-scale commodity embedding for e-commerce recommendation in alibaba", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Yue Wang", "Yongbin Sun", "Ziwei Liu", "Sanjay E Sarma", "Michael M Bronstein", "Justin M Solomon" ], "title": "Dynamic graph cnn for learning on point clouds", "venue": "arXiv preprint arXiv:1801.07829,", "year": 2018 }, { "authors": [ "Da Xu", "Chuanwei Ruan", "Evren Korpeoglu", "Sushant Kumar", "Kannan Achan" ], "title": "Self-attention with functional time representation learning", "venue": "In Advances in Neural Information 
Processing Systems,", "year": 2019 }, { "authors": [ "Da Xu", "Chuanwei Ruan", "Kamiya Motwani", "Evren Korpeoglu", "Sushant Kumar", "Kannan Achan" ], "title": "Generative graph convolutional network for growing graphs", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Da Xu", "Chuanwei Ruan", "Evren Korpeoglu", "Sushant Kumar", "Kannan Achan" ], "title": "Product knowledge graph embedding for e-commerce", "venue": "In Proceedings of the 13th International Conference on Web Search and Data Mining,", "year": 2020 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The technique of learning lower-dimensional vector embeddings on graphs have been widely applied to graph analysis tasks (Perozzi et al., 2014; Tang et al., 2015; Wang et al., 2016) and deployed in industrial systems (Ying et al., 2018; Wang et al., 2018a). Most of the graph representation learning approaches only accept static or non-temporal graphs as input, despite the fact that many graph-structured data are time-dependent. In social network, citation network, question answering forum and user-item interaction system, graphs are created as temporal interactions between nodes. Using the final state as a static portrait of the graph is reasonable in some cases, such as the proteinprotein interaction network, as long as node interactions are timeless in nature. Otherwise, ignoring the temporal information can severely diminish the modelling efforts and even causing questionable inference. For instance, models may mistakenly utilize future information for predicting past interactions during training and testing if the temporal constraints are disregarded. More importantly, the dynamic and evolving nature of many graph-related problems demand an explicitly modelling of the timeliness whenever nodes and edges are added, deleted or changed over time.\nLearning representations on temporal graphs is extremely challenging, and it is not until recently that several solutions are proposed (Nguyen et al., 2018; Li et al., 2018; Goyal et al., 2018; Trivedi et al., 2018). We conclude the challenges in three folds. Firstly, to model the temporal dynamics, node embeddings should not be only the projections of topological structures and node features but also functions of the continuous time. Therefore, in addition to the usual vector space, temporal representation learning should be operated in some functional space as well. Secondly, graph topological structures are no longer static since the nodes and edges are evolving over time, which poses\n∗Both authors contributed equally to this research.\ntemporal constraints on neighborhood aggregation methods. Thirdly, node features and topological structures can exhibit temporal patterns. For example, node interactions that took place long ago may have less impact on the current topological structure and thus the node embeddings. Also, some nodes may possess features that allows them having more regular or recurrent interactions with others. We provide sketched plots for visual illustration in Figure 1.\nSimilar to its non-temporal counterparts, in the real-world applications, models for representation learning on temporal graphs should be able to quickly generate embeddings whenever required, in an inductive fashion. GraphSAGE (Hamilton et al., 2017a) and graph attention network (GAT ) (Veličković et al., 2017) are capable of inductively generating embeddings for unseen nodes based on their features, however, they do not consider the temporal factors. Most of the temporal graph embedding methods can only handle transductive tasks, since they require re-training or the computationally-expensive gradient calculations to infer embeddings for unseen nodes or node embeddings for a new timepoint. In this work, we aim at developing an architecture to inductively learn representations for temporal graphs such that the time-aware embeddings (for unseen and observed nodes) can be obtained via a single network forward pass. 
The key to our approach is the combination of the self-attention mechanism (Vaswani et al., 2017) and a novel functional time encoding technique derived from the Bochner’s theorem from classical harmonic analysis (Loomis, 2013).\nThe motivation for adapting self-attention to inductive representation learning on temporal graphs is to identify and capture relevant pieces of the temporal neighborhood information. Both graph convolutional network (GCN ) (Kipf & Welling, 2016a) and GAT are implicitly or explicitly assigning different weights to neighboring nodes (Veličković et al., 2017) when aggregating node features. The self-attention mechanism was initially designed to recognize the relevant parts of input sequence in natural language processing. As a discrete-event sequence learning method, self-attention outputs a vector representation of the input sequence as a weighted sum of individual entry embeddings. Selfattention enjoys several advantages such as parallelized computation and interpretability (Vaswani et al., 2017). Since it captures sequential information only through the positional encoding, temporal features can not be handled. Therefore, we are motivated to replace positional encoding with some vector representation of time. Since time is a continuous variable, the mapping from the time domain to vector space has to be functional. We gain insights from harmonic analysis and propose a theoretical-grounded functional time encoding approach that is compatible with the self-attention mechanism. The temporal signals are then modelled by the interactions between the functional time encoding and nodes features as well as the graph topological structures.\nTo evaluate our approach, we consider future link prediction on the observed nodes as transductive learning task, and on the unseen nodes as inductive learning task. We also examine the dynamic node classification task using node embeddings (temporal versus non-temporal) as features to demonstrate the usefulness of our functional time encoding. We carry out extensive ablation studies and sensitivity analysis to show the effectiveness of the proposed functional time encoding and TGAT -layer." }, { "heading": "2 RELATED WORK", "text": "Graph representation learning. Spectral graph embedding models operate on the graph spectral domain by approximating, projecting or expanding the graph Laplacian (Kipf & Welling, 2016a; Henaff et al., 2015; Defferrard et al., 2016). Since their training and inference are conditioned on the specific graph spectrum, they are not directly extendable to temporal graphs. Non-spectral approaches, such as GAT, GraphSAGE and MoNET, (Monti et al., 2017) rely on the localized neighbourhood aggregations and thus are not restricted to the training graph. GraphSAGE and GAT also have the flexibility to handle evolving graphs inductively. To extend classical graph representation learning approaches to the temporal domain, several attempts have been done by cropping the temporal graph into a sequence of graph snapshots (Li et al., 2018; Goyal et al., 2018; Rahman et al., 2018; Xu et al., 2019b), and some others work with temporally persistent node (edges) (Trivedi et al., 2018; Ma et al., 2018). Nguyen et al. (2018) proposes a node embedding method based on temporal random walk and reported state-of-the-art performances. However, their approach only generates embeddings for the final state of temporal graph and can not directly apply to the inductive setting.\nSelf-attention mechanism. 
Self-attention mechanisms often have two components: the embedding layer and the attention layer. The embedding layer takes an ordered entity sequence as input. Selfattention uses the positional encoding, i.e. each position k is equipped with a vector pk (fixed or learnt) which is shared for all sequences. For the entity sequence e = (e1, . . . , el), the embedding layer takes the sum or concatenation of entity embeddings (or features) (z ∈ Rd) and their positional encodings as input:\nZe = [ ze1 + p1, . . . , ze1 + pl ]ᵀ ∈ Rl×d, or Ze = [ze1 ||p1, . . . , ze1 ||pl]ᵀ ∈ Rl×(d+dpos). (1) where || denotes concatenation operation and dpos is the dimension for positional encoding. Selfattention layers can be constructed using the scaled dot-product attention, which is defined as:\nAttn ( Q,K,V ) = softmax (QKᵀ√ d ) V, (2)\nwhere Q denotes the ’queries’, K the ’keys’ and V the ’values’. In Vaswani et al. (2017), they are treated as projections of the output Ze: Q = ZeWQ, K = ZeWK , V = ZeWV , where WQ, WK and WV are the projection matrices. Since each row of Q, K and V represents an entity, the dot-product attention takes a weighted sum of the entity ’values’ in V where the weights are given by the interactions of entity ’query-key’ pairs. The hidden representation for the entity sequence under the dot-product attention is then given by he = Attn(Q,K,V)." }, { "heading": "3 TEMPORAL GRAPH ATTENTION NETWORK ARCHITECTURE", "text": "We first derive the mapping from time domain to the continuous differentiable functional domain as the functional time encoding such that resulting formulation is compatible with self-attention mechanism as well as the backpropagation-based optimization frameworks. The same idea was explored in a concurrent work (Xu et al., 2019a). We then present the temporal graph attention layer and show how it can be naturally extended to incorporate the edge features." }, { "heading": "3.1 FUNCTIONAL TIME ENCODING", "text": "Recall that our starting point is to obtain a continuous functional mapping Φ : T → RdT from time domain to the dT -dimensional vector space to replace the positional encoding in (1). Without loss of generality, we assume that the time domain can be represented by the interval starting from origin: T = [0, tmax], where tmax is determined by the observed data. For the inner-product selfattention in (2), often the ’key’ and ’query’ matrices (K, Q) are given by identity or linear projection of Ze defined in (1), leading to terms that only involve inner-products between positional (time) encodings. Consider two time points t1, t2 and inner product between their functional encodings〈 Φ(t1),Φ(t2) 〉 . Usually, the relative timespan, rather than the absolute value of time, reveals critical temporal information. Therefore, we are more interested in learning patterns related to the timespan of |t2−t1|, which should be ideally expressed by 〈 Φ(t1),Φ(t2) 〉 to be compatible with self-attention.\nFormally, we define the temporal kernel K : T × T → R with K(t1, t2) := 〈 Φ(t1),Φ(t2) 〉 and K(t1, t2) = ψ(t1 − t2), ∀t1, t2 ∈ T for some ψ : [−tmax, tmax] → R. The temporal kernel is then\ntranslation-invariant, since K(t1 + c, t2 + c) = ψ(t1 − t2) = K(t1, t2) for any constant c. Generally speaking, functional learning is extremely complicated since it operates on infinite-dimensional spaces, but now we have transformed the problem into learning the temporal kernel K expressed by Φ. 
Nonetheless, we still need an explicit parameterization for Φ in order to conduct efficient gradient-based optimization. Classical harmonic analysis, i.e. Bochner’s theorem, motivates our final solution. We point out that the temporal kernel K is positive-semidefinite (PSD) and continuous, since it is defined via a Gram matrix and the mapping Φ is continuous. Therefore, the kernel K defined above satisfies the assumptions of Bochner’s theorem, which we state below. Theorem 1 (Bochner’s Theorem). A continuous, translation-invariant kernel $\mathcal{K}(\mathbf{x}, \mathbf{y}) = \psi(\mathbf{x} - \mathbf{y})$ on $\mathbb{R}^d$ is positive definite if and only if there exists a non-negative measure on $\mathbb{R}$ such that $\psi$ is the Fourier transform of the measure.
Consequently, when scaled properly, our temporal kernel K has the alternate expression:
$$\mathcal{K}(t_1, t_2) = \psi(t_1 - t_2) = \int_{\mathbb{R}} e^{i\omega(t_1 - t_2)}\, p(\omega)\, d\omega = \mathbb{E}_{\omega}\big[\xi_{\omega}(t_1)\,\xi_{\omega}(t_2)^{*}\big], \qquad (3)$$
where $\xi_{\omega}(t) = e^{i\omega t}$. Since the kernel K and the probability measure $p(\omega)$ are real, we extract the real part of (3) and obtain:
$$\mathcal{K}(t_1, t_2) = \mathbb{E}_{\omega}\big[\cos(\omega(t_1 - t_2))\big] = \mathbb{E}_{\omega}\big[\cos(\omega t_1)\cos(\omega t_2) + \sin(\omega t_1)\sin(\omega t_2)\big]. \qquad (4)$$
The above formulation suggests approximating the expectation by the Monte Carlo integral (Rahimi & Recht, 2008), i.e. $\mathcal{K}(t_1, t_2) \approx \frac{1}{d}\sum_{i=1}^{d} \cos(\omega_i t_1)\cos(\omega_i t_2) + \sin(\omega_i t_1)\sin(\omega_i t_2)$, with $\omega_1, \ldots, \omega_d \overset{\text{i.i.d}}{\sim} p(\omega)$. Therefore, we propose the finite-dimensional functional mapping
$$t \mapsto \Phi_d(t) := \sqrt{\tfrac{1}{d}}\,\big[\cos(\omega_1 t), \sin(\omega_1 t), \ldots, \cos(\omega_d t), \sin(\omega_d t)\big], \qquad (5)$$
and it is easy to show that $\langle \Phi_d(t_1), \Phi_d(t_2) \rangle \approx \mathcal{K}(t_1, t_2)$. As a matter of fact, we prove the stochastic uniform convergence of $\langle \Phi_d(t_1), \Phi_d(t_2) \rangle$ to the underlying $\mathcal{K}(t_1, t_2)$ and show that only a reasonable number of samples is needed to achieve a proper estimation, as stated in Claim 1. Claim 1. Let $p(\omega)$ be the corresponding probability measure stated in Bochner’s Theorem for kernel function $\mathcal{K}$. Suppose the feature map $\Phi$ is constructed as described above using samples $\{\omega_i\}_{i=1}^{d}$. Then we only need $d = \Omega\big(\tfrac{1}{\epsilon^2}\log(\sigma_p^2 t_{\max})\big)$ samples to have
$$\sup_{t_1, t_2 \in \mathcal{T}} \big|\Phi_d(t_1)'\Phi_d(t_2) - \mathcal{K}(t_1, t_2)\big| < \epsilon$$
with any probability, for all $\epsilon > 0$, where $\sigma_p^2$ is the second moment with respect to $p(\omega)$.
The proof is provided in the supplement material.
By applying Bochner’s theorem, we convert the problem of kernel learning to distribution learning, i.e. estimating the $p(\omega)$ in Theorem 1. A straightforward solution is to apply the reparameterization trick, using auxiliary random variables with a known marginal distribution as in variational autoencoders (Kingma & Welling, 2013). However, the reparameterization trick is often limited to certain distributions such as the ’local-scale’ family, which may not be rich enough for our purpose. For instance, when $p(\omega)$ is multimodal it is difficult to reconstruct the underlying distribution via direct reparameterization. An alternative approach is to use the inverse cumulative distribution function (CDF) transformation. Rezende & Mohamed (2015) propose using a parameterized normalizing flow, i.e. a sequence of invertible transformation functions, to approximate an arbitrarily complicated CDF and efficiently sample from it. Dinh et al. (2016) further consider stacking bijective transformations, known as affine coupling layers, to achieve more effective CDF estimation. These methods learn the inverse CDF function $F^{-1}_{\theta}(\cdot)$, parameterized by flow-based networks, and draw samples from the corresponding distribution. On the other hand, if we consider a non-parametric approach to estimating the distribution, then learning $F^{-1}(\cdot)$ 
and obtain d samples from it is equivalent to directly optimizing the {ω1, . . . , ωd} in (4) as free model parameters. In practice, we find these two approaches to have highly comparable performances (see supplement material). Therefore we focus on the non-parametric approach, since it is more parameter-efficient and has faster training speed (as no sampling during training is required).\nThe above functional time encoding is fully compatible with self-attention, thus they can replace the positional encodings in (1) and their parameters are jointly optimized as part of the whole model." }, { "heading": "3.2 TEMPORAL GRAPH ATTENTION LAYER", "text": "We use vi and xi ∈ Rd0 to denote node i and its raw node features. The proposed TGAT architecture depends solely on the temporal graph attention layer (TGAT layer). In analogy to GraphSAGE and GAT, the TGAT layer can be thought of as a local aggregation operator that takes the temporal neighborhood with their hidden representations (or features) as well as timestamps as input, and the output is the time-aware representation for target node at any time point t. We denote the hidden representation output for node i at time t from the lth layer as h̃(l)i (t).\nSimilar to GAT, we perform the masked self-attention to take account of the structural information (Veličković et al., 2017). For node v0 at time t, we consider its neighborhood N (v0; t) = {v1, . . . , vN} such that the interaction between v0 and vi ∈ N (v0; t), which takes place at time ti, is prior to t1. The input of TGAT layer is the neighborhood information Z ={ h̃\n(l−1) 1 (t1), . . . , h̃ (l−1) N (tN )\n} and the target node information with some time point ( h̃\n(l−1) 0 (t), t\n) .\nWhen l = 1, i.e. for the first layer, the inputs are just raw node features. The layer produces the time-aware representation of target node v0 at time t, denoted by h̃ (l) 0 (t), as its output. Due to the translation-invariant assumption for the temporal kernel, we can alternatively use {t−t1, . . . , t−tN} as interaction times, since |ti − tj | =\n∣∣(t− ti)− (t− tj)∣∣ and we only care for the timespan. In line with original self-attention mechanism, we first obtain the entity-temporal feature matrix as\nZ(t) = [ h̃\n(l−1) 0 (t)||ΦdT (0), h̃ (l−1) 1 (t1)||ΦdT (t− t1), . . . , h̃ (l−1) N (tN )||ΦdT (t− tN )\n]ᵀ (or use sum)\n(6) and forward it to three different linear projections to obtain the ’query’, ’key’ and ’value’:\nq(t) = [ Z(t) ] 0 WQ, K(t) = [ Z(t) ] 1:N WK , V(t) = [ Z(t) ] 1:N WV ,\nwhere WQ,WK ,WV ∈ R(d+dT )×dh are the weight matrices that are employed to capture the interactions between time encoding and node features. For notation simplicity, in the following discussion we treat the dependence of the intermediate outputs on target time t as implicit. The attention weights {αi}Ni=1 of the softmax function output in (2) is given by: αi = exp ( qᵀKi ) / (∑ q exp ( qᵀKq )) . The attention weight αi reveals how node i attends to the features of node v0 within the topological structure defined as N (v0; t) after accounting for their interaction time with v0. The self-attention therefore captures the temporal interactions with both node features and topological features and defines a local temporal aggregation operator on graph. The hidden representation for any node vi ∈ N (v0; t) is given by: αiVi. The mechanism can be effectively shared across all nodes for any time point. 
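To make Eq. (5) concrete before moving on, below is a minimal NumPy sketch of the functional time encoding, with the frequencies {ω1, . . . , ωd} treated as free parameters (the non-parametric variant adopted above). The dimension, the normal initialization of the frequencies, and all names are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

class FunctionalTimeEncoding:
    """Phi_d(t) from Eq. (5): t -> sqrt(1/d) [cos(w_1 t), sin(w_1 t), ...].

    In the non-parametric variant, the frequencies w_1..w_d are free,
    trainable parameters; the normal initialization below is only an
    illustrative assumption.
    """

    def __init__(self, d, seed=0):
        self.d = d
        self.omega = np.random.default_rng(seed).normal(size=d)

    def __call__(self, t):
        # t: scalar or array of timespans; output has 2*d features.
        phase = self.omega * np.asarray(t, dtype=float)[..., None]
        return np.sqrt(1.0 / self.d) * np.concatenate(
            [np.cos(phase), np.sin(phase)], axis=-1)


# The inner product of two encodings is a Monte Carlo estimate of the
# temporal kernel K(t1, t2) = E_w[cos(w (t1 - t2))].  With w ~ N(0, 1),
# the true kernel is exp(-(t1 - t2)^2 / 2) (the Gaussian characteristic
# function), which the estimate approaches as d grows:
enc = FunctionalTimeEncoding(d=512)
t1, t2 = 3.0, 3.5
print(enc(t1) @ enc(t2), np.exp(-0.5 * (t1 - t2) ** 2))
```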
We then take the row-wise sum of the dot-product self-attention output above as the hidden neighborhood representation, i.e. $h(t) = \text{Attn}\big(q(t), K(t), V(t)\big) \in \mathbb{R}^{d_h}$. To combine the neighborhood representation with the target node features, we adopt the same practice as GraphSAGE and concatenate the neighborhood representation with the target node’s feature vector $z_0$. We then pass it through a feed-forward neural network to capture non-linear interactions between the features, as in Vaswani et al. (2017):
$$\tilde{h}^{(l)}_0(t) = \text{FFN}\big(h(t)\,\|\,x_0\big) \equiv \text{ReLU}\big([h(t)\,\|\,x_0]\,W^{(l)}_0 + b^{(l)}_0\big)\,W^{(l)}_1 + b^{(l)}_1,$$
$$W^{(l)}_0 \in \mathbb{R}^{(d_h+d_0)\times d_f},\quad W^{(l)}_1 \in \mathbb{R}^{d_f\times d},\quad b^{(l)}_0 \in \mathbb{R}^{d_f},\quad b^{(l)}_1 \in \mathbb{R}^{d},$$
where $\tilde{h}^{(l)}_0(t) \in \mathbb{R}^d$ is the final output representing the time-aware node embedding at time $t$ for the target node. The TGAT layer can therefore be used for the node classification task with the semi-supervised learning framework proposed in Kipf & Welling (2016a), as well as for the link prediction task with the encoder-decoder framework summarized by Hamilton et al. (2017b).
1Node $v_i$ may have multiple interactions with $v_0$ at different time points. For the sake of presentation clarity, we do not explicitly differentiate such recurring interactions in our notations.
Veličković et al. (2017) suggest that using multi-head attention improves performance and stabilizes training for GAT. For generalization purposes, we also show that the proposed TGAT layer can be easily extended to the multi-head setting. Consider the dot-product self-attention outputs from a total of $k$ different heads, i.e. $h^{(i)} \equiv \text{Attn}^{(i)}\big(q(t), K(t), V(t)\big)$, $i = 1, \ldots, k$. We first concatenate the $k$ neighborhood representations into a combined vector and then carry out the same procedure:
$$\tilde{h}^{(l)}_0(t) = \text{FFN}\big(h^{(1)}(t)\,\|\ldots\|\,h^{(k)}(t)\,\|\,x_0\big).$$
Just like GraphSAGE, a single TGAT layer aggregates the localized one-hop neighborhood, and by stacking $L$ TGAT layers the aggregation extends to $L$ hops. Similar to GAT, our approach does not restrict the size of the neighborhood. We provide a graphical illustration of our TGAT layer in Figure 2." }, { "heading": "3.3 EXTENSION TO INCORPORATE EDGE FEATURES", "text": "We show that the TGAT layer can be naturally extended to handle edge features in a message-passing fashion. Simonovsky & Komodakis (2017) and Wang et al. (2018b) modify classical spectral-based graph convolutional networks to incorporate edge features. Battaglia et al. (2018) propose general graph neural network frameworks where edge features can be processed. For temporal graphs, we consider the general setting where each dynamic edge is associated with a feature vector, i.e. the interaction between $v_i$ and $v_j$ at time $t$ induces the feature vector $x_{i,j}(t)$. To propagate edge features during the TGAT aggregation, we simply extend the $Z(t)$ in (6) to
$$Z(t) = \big[\ldots,\; \tilde{h}^{(l-1)}_i(t_i)\,\|\,x_{0,i}(t_i)\,\|\,\Phi_{d_T}(t - t_i),\;\ldots\big] \;\text{(or use summation)}, \qquad (7)$$
such that the edge information is propagated to the target node’s hidden representation and then passed on to the next layer (if one exists). The remaining structures stay the same as in Section 3.2." }, { "heading": "3.4 TEMPORAL SUB-GRAPH BATCHING", "text": "Stacking L TGAT layers is equivalent to aggregating over the L-hop neighborhood. For each L-hop sub-graph constructed during batch-wise training, all message-passing directions must be aligned with the observed chronological orders. 
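To tie Sections 3.2 and 3.4 together, here is a minimal NumPy sketch of one single-head TGAT aggregation over a chronologically masked neighborhood. The weight names, shapes, and the softmax scaling are illustrative assumptions; the FFN and multi-head parts described above are omitted for brevity.

```python
import numpy as np

def tgat_aggregate(h0, t, neigh_h, neigh_t, time_enc, Wq, Wk, Wv):
    """One single-head TGAT aggregation (Eq. (6) plus dot-product attention).

    h0:      (d,)   previous-layer representation of the target node v0
    t:       float  query time
    neigh_h: (N, d) previous-layer representations of N(v0; t)
    neigh_t: (N,)   interaction times, already masked so neigh_t < t
                    (the chronological alignment of Section 3.4)
    time_enc: maps an array of timespans to (..., dT') encodings,
              e.g. the FunctionalTimeEncoding sketched earlier
    Wq, Wk, Wv: (d + dT', dh) projection matrices
    Returns the temporal neighborhood summary h(t) of shape (dh,).
    """
    # Entity-temporal feature rows of Z(t), Eq. (6): concatenate the
    # hidden representations with the time encodings of the timespans.
    z0 = np.concatenate([h0, time_enc(0.0)])
    Z = np.concatenate([neigh_h, time_enc(t - neigh_t)], axis=-1)
    q, K, V = z0 @ Wq, Z @ Wk, Z @ Wv
    scores = K @ q / np.sqrt(q.shape[0])   # scaled dot product, Eq. (2)
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()            # attention weights alpha_i
    return alpha @ V                       # weighted sum of the 'values'

# In the full layer, h(t) is concatenated with the raw feature x0 and
# passed through the two-layer FFN; stacking L such calls gives the
# L-hop aggregation.
```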
Unlike the non-temporal setting where each edge appears only once, in temporal graphs two nodes can have multiple interactions at different time points. Whether or not to allow loops that involve the target node should be judged case-by-case. Sampling from the neighborhood, also known as neighborhood dropout, may speed up and stabilize model training. For temporal graphs, neighborhood dropout can be carried out uniformly or weighted by the inverse timespan, such that more recent interactions have a higher probability of being sampled." }, { "heading": "3.5 COMPARISONS TO RELATED WORK", "text": "The functional time encoding technique and the TGAT layer introduced in Sections 3.1 and 3.2 solve several critical challenges, and the TGAT network intrinsically connects to several prior methods.
• Instead of cropping temporal graphs into a sequence of snapshots or constructing time-constrained random walks, which inspired most of the current temporal graph embedding methods, we directly learn the functional representation of time. The proposed approach is motivated by and thus fully compatible with the well-established self-attention mechanism. Also, to the best of our knowledge, no previous work has discussed the temporal-feature interactions for temporal graphs, which our approach also considers.
• The TGAT layer is computationally efficient compared with RNN-based models, since the masked self-attention operation is parallelizable, as suggested by Vaswani et al. (2017). The per-batch time complexity of the TGAT layer with k heads and l layers can be expressed as O((kÑ)^l), where Ñ is the average neighborhood size, which is comparable to GAT. When using multi-head attention, the computation for each head can be parallelized as well.
• The inference with TGAT is entirely inductive. With an explicit functional expression h̃(t) for each node, the time-aware node embeddings can be easily inferred for any timestamp via a single network forward pass. Similarly, whenever the graph is updated, the embeddings for both unseen and observed nodes can be quickly inferred in an inductive fashion similar to that of GraphSAGE, and the computations can be parallelized across all nodes.
• GraphSAGE with mean pooling (Hamilton et al., 2017a) can be interpreted as a special case of the proposed method, where the temporal neighborhood is aggregated with equal attention coefficients. GAT is like the time-agnostic version of our approach, but with a different formulation for self-attention, as it refers to the work of Bahdanau et al. (2014). We discuss the differences in detail in the Appendix. It is also straightforward to show our connection with memory networks (Sukhbaatar et al., 2015) by thinking of the temporal neighborhoods as memory. The techniques developed in our work may also help adapt GAT and GraphSAGE to temporal settings, as we show in our experiments." }, { "heading": "4 EXPERIMENT AND RESULTS", "text": "We test the performance of the proposed method against a variety of strong baselines (adapted for temporal settings when possible) and competing approaches, for both the inductive and transductive tasks on two benchmark datasets and one large-scale industrial dataset." }, { "heading": "4.1 DATASETS", "text": "Real-world temporal graphs consist of time-sensitive node interactions, evolving node labels, and new nodes and edges. 
We choose the following datasets which contain all scenarios.\nReddit dataset.2 We use the data from active users and their posts under subreddits, leading to a temporal graph with 11,000 nodes,∼700,000 temporal edges and dynamic labels indicating whether a user is banned from posting. The user posts are transformed into edge feature vectors.\nWikipedia dataset.3 We use the data from top edited pages and active users, yielding a temporal graph ∼9,300 nodes and around 160,000 temporal edges. Dynamic labels indicate if users are temporarily banned from editing. The user edits are also treated as edge features.\nIndustrial dataset. We choose 70,000 popular products and 100,000 active customers as nodes from the online grocery shopping website4 and use the customer-product purchase as temporal edges (∼2 million). The customers are tagged with labels indicating if they have a recent interest in dietary products. Product features are given by the pre-trained product embeddings (Xu et al., 2020).\nWe do the chronological train-validation-test split with 70%-15%-15% according to node interaction timestamps. The dataset and preprocessing details are provided in the supplement material." }, { "heading": "4.2 TRANSDUCTIVE AND INDUCTIVE LEARNING TASKS", "text": "Since the majority of temporal information is reflected via the timely interactions among nodes, we choose to use a more revealing link prediction setup for training. Node classification is then treated as the downstream task using the obtained time-aware node embeddings as input.\n2http://snap.stanford.edu/jodie/reddit.csv 3http://snap.stanford.edu/jodie/wikipedia.csv 4https://grocery.walmart.com/\nTransductive task examines embeddings of the nodes that have been observed in training, via the future link prediction task and the node classification. To avoid violating temporal constraints, we predict the links that strictly take place posterior to all observations in the training data.\nInductive task examines the inductive learning capability using the inferred representations of unseen nodes, by predicting the future links between unseen nodes and classify them based on their inferred embedding dynamically. We point out that it suffices to only consider the future sub-graph for unseen nodes since they are equivalent to new graphs under the non-temporal setting.\nAs for the evaluation metrics, in the link prediction tasks, we first sample an equal amount of negative node pairs to the positive links and then compute the average precision (AP) and classification accuracy. In the downstream node classification tasks, due to the label imbalance in the datasets, we employ the area under the ROC curve (AUC)." }, { "heading": "4.3 BASELINES", "text": "Transductive task: for link prediction of observed nodes, we choose the compare our approach with the state-of-the-art graph embedding methods: GAE and VGAE (Kipf & Welling, 2016b). For complete comparisons, we also include the skip-gram-based node2vec (Grover & Leskovec, 2016) as well as the spectral-based DeepWalk model (Perozzi et al., 2014), using the same inner-product decoder as GAE for link prediction. The CDTNE model based on the temporal random walk has been reported with superior performance on transductive learning tasks (Nguyen et al., 2018), so we include CDTNE as the representative for temporal graph embedding approaches.\nInductive task: few approaches are capable of managing inductive learning on graphs even in the non-temporal setting. 
As a consequence, we choose GraphSAGE and GAT as baselines after adapting them to the temporal setting. In particular, we equip them with the same temporal sub-graph batching describe in Section 3.4 to maximize their usage on temporal information. Also, we implement the extended version for the baselines to include edge features in the same way as ours (in Section 3.3). We experiment on different aggregation functions for GraphSAGE, i.e. Graph-\nSAGE -mean, GraphSAGE -pool and GraphSAGE -LSTM. In accordance with the original work of Hamilton et al. (2017a), GraphSAGE -LSTM gives the best validation performance among the three approaches, which is reasonable under temporal setting since LSTM aggregation takes account of the sequential information. Therefore we report the results of GraphSAGE -LSTM.\nIn addition to the above baselines, we implement a version of TGAT with all temporal attention weights set to equal value (Const-TGAT ). Finally, to show that the superiority of our approach owes to both the time encoding and the network architecture, we experiment with the enhanced GAT and GraphSAGE -mean by concatenating the proposed time encoding to the original features during temporal aggregations (GAT+T and GraphSAGE+T )." }, { "heading": "4.4 EXPERIMENT SETUP", "text": "We use the time-sensitive link prediction loss function for training the l-layer TGAT network: ` = ∑\n(vi,vj ,tij)∈E\n− log ( σ ( − h̃li(tij)ᵀh̃lj(tij) )) −Q.Evq∼Pn(v) log ( σ ( h̃li(tij) ᵀh̃lq(tij) )) , (8)\nwhere the summation is over the observed edges on vi and vj that interact at time tij , and σ(.) is the sigmoid function, Q is the number of negative samples and Pn(v) is the negative sampling distribution over the node space. As for tuning hyper-parameters, we fix the node embedding dimension and the time encoding dimension to be the original feature dimension for simplicity, and then select the number of TGAT layers from {1,2,3}, the number of attention heads from {1,2,3,4,5}, according to the link prediction AP score in the validation dataset. Although our method does not put restriction on the neighborhood size during aggregations, to speed up training, specially when using the multi-hop aggregations, we use neighborhood dropout (selected among p ={0.1, 0.3, 0.5}) with the uniform sampling. During training, we use 0.0001 as learning rate for Reddit and Wikipedia dataset and 0.001 for the industrial dataset, with Glorot initialization and the Adam SGD optimizer. We do not experiment on applying regularization since our approach is parameter-efficient and only requires Ω ( (d + dT )dh + (dh + d0)df + dfd ) parameters for each attention head, which is independent of the graph and neighborhood size. Using two TGAT layers and two attention heads with dropout rate as 0.1 give the best validation performance. For inference, we inductively compute the embeddings for both the unseen and observed nodes at each time point that the graph evolves, or when the node labels are updated. We then use these embeddings as features for the future link prediction and dynamic node classifications with multilayer perceptron.\nWe further conduct ablation study to demonstrate the effectiveness of the proposed functional time encoding approach. We experiment on abandoning time encoding or replacing it with the original positional encoding (both fixed and learnt). 
We also compare the uniform neighborhood dropout to sampling with inverse timespan (where recent edges are more likely to be sampled); the comparison is provided in the supplement material, along with other implementation details and the setups for baselines." }, { "heading": "4.5 RESULTS", "text": "The results in Table 1 and Table 2 demonstrate the state-of-the-art performance of our approach on both transductive and inductive learning tasks. In the inductive learning task, our TGAT network significantly improves upon the upgraded GraphSAGE-LSTM and GAT in accuracy and average precision, by at least 5% on both metrics, and in the transductive learning task TGAT consistently outperforms all baselines across datasets. While GAT+T and GraphSAGE+T slightly outperform or tie with GAT and GraphSAGE-LSTM, they are nevertheless outperformed by our approach. On one hand, the results suggest that the time encoding has the potential to extend non-temporal graph representation learning methods to temporal settings. On the other hand, we note that the time encoding still works best with our network architecture, which is designed for temporal graphs. Overall, the results demonstrate the superiority of our approach in learning representations on temporal graphs over prior models. We also see the benefit of assigning temporal attention weights to neighboring nodes, where TGAT significantly outperforms Const-TGAT in all three tasks. The dynamic node classification outcome (in Table 3) further suggests the usefulness of our time-aware node embeddings for downstream tasks, as they surpass all the baselines. The ablation study results of Figure 3 reveal the effectiveness of the proposed functional time encoding approach in capturing temporal signals, as it outperforms the positional encoding counterparts." }, { "heading": "4.6 ATTENTION ANALYSIS", "text": "To shed some light on the temporal signals captured by the proposed TGAT, we analyze the pattern of the attention weights {αij(t)} as functions of both the time t and the node pairs (i, j) in the inference stage. Firstly, we analyze how the attention weights change with respect to the timespans of previous interactions, by plotting the attention weights $\{\alpha_{jq}(t_{ij}) \mid q \in \mathcal{N}(v_j; t_{ij})\} \cup \{\alpha_{ik}(t_{ij}) \mid k \in \mathcal{N}(v_i; t_{ij})\}$ against the timespans $\{t_{ij} - t_{jq}\} \cup \{t_{ij} - t_{ik}\}$ when predicting the link for $(v_i, v_j, t_{ij}) \in E$ (Figure 4a). This gives us an empirical estimate of α(∆t), where a smaller ∆t means a more recent interaction. Secondly, we analyze how the topological structures affect the attention weights as time elapses. Specifically, we focus on the topological structure of the recurring neighbours, by finding out what attention weights the model puts on neighbouring nodes with different numbers of reoccurrences. Since the functional forms of all {αij(.)} are fixed after training, we are able to feed in different target times t and then record their values on neighbouring nodes with different numbers of occurrences (Figure 4b). From Figure 4a, we observe that TGAT captures the pattern of placing less attention on more distant interactions in all three datasets. In Figure 4b, it is clear that when predicting an interaction further in the future, TGAT regards neighbouring nodes with a higher number of occurrences as more important. These attention patterns are meaningful, since more recent and repeated actions often have a larger influence on users’ future interests."
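Referring back to the training setup of Section 4.4, the time-sensitive link prediction objective of Eq. (8) amounts to a few lines of code. The sketch below is written in plain NumPy under the standard negative-sampling convention, where positive pairs are scored with σ(⟨·,·⟩) and negative samples with σ(−⟨·,·⟩); the batch shapes, names, and the numerical stabilizer are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def link_prediction_loss(h_src, h_dst, h_neg):
    """Time-sensitive link prediction loss in the spirit of Eq. (8).

    h_src, h_dst: (B, d)    time-aware embeddings h_i(t_ij), h_j(t_ij)
                            of the B observed interacting pairs.
    h_neg:        (B, Q, d) embeddings of Q negative samples per pair,
                            drawn from the noise distribution Pn(v).
    """
    pos = np.einsum('bd,bd->b', h_src, h_dst)      # positive-pair scores
    neg = np.einsum('bd,bqd->bq', h_src, h_neg)    # negative-pair scores
    loss = -np.log(sigmoid(pos) + 1e-12).sum()     # pull linked pairs together
    loss += -np.log(sigmoid(-neg) + 1e-12).sum()   # push sampled pairs apart
    return loss / h_src.shape[0]
```

In practice, the same objective is minimized with the Adam optimizer over temporal sub-graph batches, as described above.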
}, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "We introduce a novel time-aware graph attention network for inductive representation learning on temporal graphs. We adapt the self-attention mechanism to handle the continuous time by proposing a theoretically-grounded functional time encoding. Theoretical and experimental analysis demonstrate the effectiveness of our approach for capturing temporal-feature signals in terms of both node and topological features on temporal graphs. Self-attention mechanism often provides useful model interpretations (Vaswani et al., 2017), which is an important direction of our future work. Developing tools to visualize the evolving graph dynamics and temporal representations efficiently is another important direction for both research and application. Also, the functional time encoding technique has huge potential for adapting other deep learning methods to the temporal graph domain." }, { "heading": "A APPENDIX", "text": "A.1 PROOF FOR CLAIM 1\nProof. The proof is also shown in our concurrent work Xu et al. (2019a). We also provide it here for completeness. To prove the results in Claim 1, we alternatively show that under the same condition,\nPr (\nsup t1,t2∈T\n|ΦBd (t1) ′ ΦBd (t2)−K(t1, t2)| ≥ ) ≤ 4σp √ tmax exp (−d 2\n32\n) . (9)\nDefine the score S(t1, t2) = ΦBd (t1) ′ ΦBd (t2). The goal is to derive a uniform upper bound for s(t1, t2) − K(t1, t2). By assumption S(t1, t2) is an unbiased estimator for K(t1, t2), i.e. E[S(t1, t2)] = K(t1, t2). Due to the translation-invariant property of S and K, we let ∆(t) ≡ s(t1, t2) − K(t1, t2), where t ≡ t1 − t2 for all t1, t2 ∈ [0, tmax]. Also we define s(t1 − t2) := S(t1, t2). Therefore t ∈ [−tmax, tmax], and we use t ∈ T̃ as the shorthand notation. The LHS in (1) now becomes Pr ( supt∈T̃ |∆(t)| ≥ ) .\nNote that T̃ ⊆ ∪N−1i=0 Ti with Ti = [−tmax + 2itmax N ,−tmax + 2(i+1)tmax N ] for i = 1, . . . , N . So ∪N−1i=0 Ti is a finite cover of T̃ . Define ti = −tmax + (2i+1)tmax\nN , then for any t ∈ Ti, i = 1, . . . , N we have\n|∆(t)| = |∆(t)−∆(ti) + ∆(ti)| ≤ |∆(t)−∆(ti)|+ |∆(ti)| ≤ L∆|t− ti|+ |∆(ti)|\n≤ L∆ 2tmax N + |∆(ti)|,\n(10)\nwhere L∆ = maxt∈T̃ ‖∇∆(t)‖ (since ∆ is differentiable) with the maximum achieved at t∗. So we may bound the two events separately.\nFor |∆(ti)| we simply notice that trigeometric functions are bounded between [−1, 1], and therefore −1 ≤ ΦBd (t1) ′ ΦBd (t2) ≤ 1. The Hoeffding’s inequality for bounded random variables immediately gives us:\nPr ( |∆(ti)| >\n2\n) ≤ 2exp(−d 2\n16 ).\nSo applying the Hoeffding-type union bound to the finite cover gives\nPr(∪N−1i=0 |∆(ti)| ≥ 2 ) ≤ 2N exp(−d\n2\n16 ) (11)\nFor the other event we first apply Markov inequality and obtain:\nPr ( L∆\n2tmax N ≥ 2\n) = Pr ( L∆ ≥ N\n4tmax\n) ≤ 4tmaxE[L 2 ∆]\nN . (12)\nAlso, since E[s(t1 − t2)] = ψ(t1 − t2), we have\nE[L2∆] = E‖∇s(t∗)−∇ψ(t∗)‖2 = E‖∇s(t∗)‖2 − E‖∇ψ(t∗)‖2 ≤ E‖∇s(t∗)‖2 = σ2p, (13)\nwhere σ2p is the second momentum with respect to p(ω).\nCombining (11), (12) and (11) gives us:\nPr (\nsup t∈T̃ |∆(t)| ≥\n) ≤ 2N exp(−d 2\n16 ) +\n4tmaxσ 2 p\nN . (14)\nIt is straightforward to examine that the RHS of (14) is a convex function of N and is minimized by\nN∗ = σp √ 2tmax exp( d 2 32 ). Plug N ∗ back to (14) and we obtain (9). We then solve for d according to (9) and obtain the results in Claim 1.\nA.2 COMPARISONS BETWEEN THE ATTENTION MECHANISM OF TGAT AND GAT\nIn this part, we provide detailed comparisons between the attention mechanism employed by our proposed TGAT and the GAT proposed by Veličković et al. 
(2017). Other than the obvious fact that GAT does not handle temporal information, the main difference lies in the formulation of attention weights. While GAT depends on the attention mechanism proposed by Bahdanau et al. (2014), our architecture refers to the self-attention mechanism of Vaswani et al. (2017). Firstly, the attention mechanism used by GAT does not involve the notions of ’query’, ’key’ and ’value’ nor the dotproduct formulation introduced in (2). As a consequence, the attention weight between node vi and its neighbor vj is computed via\nαij = exp\n( LeakyReLU ( aᵀ[Whi||Whj ] )) ∑ k∈N (vi) exp ( LeakyReLU ( aᵀ[Whi||Whk]\n)) , where a is a weight vector, W is a weight matrix, N (vi) is the neighorhood set for node vi and hi is the hidden representation of node vi. It is then obvious that their computation of αij is very different from our approach. In TGAT, after expanding the expressions in Section 3, the attention weight is computed by:\nαij(t) = exp\n(( [h̃i(ti)||ΦdT (t− ti)]WQ )ᵀ( [h̃j(tj)||ΦdT (t− tj)]WK )) ∑ k∈N (vi;t) exp (( [h̃i(ti)||ΦdT (t− ti)]WQ )ᵀ( [h̃k(tk)||ΦdT (t− tk)]WK\n)) . Intuitively speaking, the attention mechanism of GAT relies on the parameter vector a and the LeakyReLU(.) to capture the hidden factor interactions between entities in the sequence, while we use the linear transformation followed by the dot-product to capture pair-wise interactions of the hidden factors between entities and the time embeddings. The dot-product formulation is important for our approach. From the theoretical perspective, the time encoding functional form is derived according to the notion of temporal kernel K and its inner-product decomposition (Section 3). As for the practical performances, we see from Table 1, 2 and 3 that even after we equip GAT with the same time encoding, the performance is still inferior to our TGAT.\nA.3 DETAILS ON DATASETS AND PREPROCESSING\nReddit dataset: this benchmark dataset contains users interacting with subreddits by posting under the subreddits. The timestamps tell us when the user makes the posts. The dataset uses the posts made in a one-month span, and selects the most active users and subreddits as nodes, giving a total of 11,000 nodes and around 700,000 temporal edges. The user posts have textual features that are transformed into a 172-dimensional vector representing under the linguistic inquiry and word count (LIWC) categories (Pennebaker et al., 2001). The dynamic binary labels indicate if a user is banned from posting under a subreddit. Since node features are not provided in the original dataset, we use the all-zero vector instead.\nWikipedia dataset: the dataset also collects one-month of interactions induced by users’ editing the Wikipedia pages. The the top edited pages and active users are considered, leading to ∼9,300 nodes and around 160,000 temporal edges. Similar to the Reddit dataset, we also have the ground-truth dynamic labels on whether a user is banned from editing a Wikipedia page. User edits consist of the textual features and are also converted into 172-dimensional LIWC feature vectors. Node features are also not provided, so we also use the all-zero vector as well.\nIndustrial dataset: we obtain the large-scale customer-product interaction graph from the online grocery shopping platform grocery.walmart.com. We select ∼70,000 most popular products and 100,000 active customers as nodes and use the customer-product purchase interactions over a one-month period as temporal edges (∼2 million). 
A.3 DETAILS ON DATASETS AND PREPROCESSING

Reddit dataset: this benchmark dataset contains users interacting with subreddits by posting under the subreddits. The timestamps tell us when the user makes the posts. The dataset uses the posts made in a one-month span, and selects the most active users and subreddits as nodes, giving a total of 11,000 nodes and around 700,000 temporal edges. The user posts have textual features that are transformed into a 172-dimensional vector under the linguistic inquiry and word count (LIWC) categories (Pennebaker et al., 2001). The dynamic binary labels indicate if a user is banned from posting under a subreddit. Since node features are not provided in the original dataset, we use the all-zero vector instead.

Wikipedia dataset: the dataset also collects one month of interactions induced by users editing Wikipedia pages. The top edited pages and most active users are considered, leading to ∼9,300 nodes and around 160,000 temporal edges. Similar to the Reddit dataset, we also have ground-truth dynamic labels on whether a user is banned from editing a Wikipedia page. User edits consist of textual features and are also converted into 172-dimensional LIWC feature vectors. Node features are not provided either, so we use the all-zero vector as well.

Industrial dataset: we obtain the large-scale customer-product interaction graph from the online grocery shopping platform grocery.walmart.com. We select the ∼70,000 most popular products and 100,000 active customers as nodes and use the customer-product purchase interactions over a one-month period as temporal edges (∼2 million). Each purchase interaction is timestamped, which we use to construct the temporal graph. The customers are labelled with business tags, indicating if they are interested in dietary products according to their most recent purchase records. Each product node possesses contextual features containing its name, brand, categories and short description. The previous LIWC categories no longer apply, since the product contextual features are not natural sentences. We use a product embedding approach (Xu et al., 2020) to embed each product's contextual features into a 100-dimensional vector space as preprocessing. The user nodes and edges do not possess features.

We then split the temporal graphs chronologically into 70%-15%-15% for training, validation and testing according to the time epochs of edges, as illustrated in Figure 5 with the Reddit dataset. Since all three datasets have a relatively stationary edge-count distribution over time, using the 70th and 85th percentile time points to split the dataset results in approximately 70%-15%-15% of total edges, as suggested by Figure 5.

To ensure that an appropriate number of future edges among the unseen nodes will show up during validation and testing, for each dataset we randomly sample 10% of nodes, mask them during training, and treat them as unseen nodes by only considering their interactions in the validation and testing periods. This manipulation is necessary since the new nodes that show up during the validation and testing periods may not have much interaction among themselves. The statistics for the three datasets are summarized in Table 4.

Preprocessing.

For the Node2vec and DeepWalk baselines, which only take static graphs as input, the graph is constructed using all edges in the training data regardless of temporal information. For DeepWalk, we treat the recurrent edges as appearing only once, so the graph is unweighted. Although our approach handles both directed and undirected graphs, for the sake of training stability of the baselines, we treat the graphs as undirected. For Node2vec, we use the count of recurrent edges as their weights and construct the weighted graph. For all three datasets, the obtained graphs in both cases are undirected and do not have isolated nodes. Since we choose from active users and popular items, the graphs are all connected.

For the graph convolutional network baselines, i.e., GAE and VGAE, we construct the same undirected weighted graph as for Node2vec. Since GAE and VGAE do not take edge features as input, we use the posts/edits as user node features. For each user in the Reddit and Wikipedia datasets, we take the average of their post/edit feature vectors as the node feature. For the industrial dataset, where user features are not available, we use the all-zero feature vector instead.

As for the downstream dynamic node classification task, we use the same training, validation and testing datasets as above. Since we aim at predicting the dynamic node labels, for the Reddit and Wikipedia datasets we predict if the user node is banned, and for the industrial dataset we predict the customers' business labels, at different time points. Due to the label imbalance, in each batch when training the node label classifier we conduct stratified sampling such that the label distributions are similar across batches.
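A minimal sketch of this stratified batching (the index-based interface and function names are our assumptions):

import numpy as np

def stratified_batches(labels, batch_size, rng):
    # yield index batches whose label proportions match the full training set;
    # for a long-tailed class, at least one example per batch is kept
    labels = np.asarray(labels)
    classes = np.unique(labels)
    pools = {c: rng.permutation(np.where(labels == c)[0]) for c in classes}
    per_batch = {c: max(1, round(batch_size * (labels == c).mean())) for c in classes}
    for b in range(len(labels) // batch_size):
        batch = np.concatenate([pools[c][b * per_batch[c]:(b + 1) * per_batch[c]]
                                for c in classes])
        yield rng.permutation(batch)   # shuffle within the batch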
A.4 EXPERIMENT SETUP FOR BASELINES

For all baselines, we set the node embedding dimension to d = 100, in accordance with our approach.

Transductive baselines.

Since Node2vec and DeepWalk do not provide room for task-specific modification, we do not modify their default loss function and input format. For both approaches, we select the number of walks among {60,80,100} and the walk-length among {20,30,40} according to the validation AP. Setting the number of walks to 80 and the walk-length to 30 gives slightly better validation performance than the other settings for both approaches. Notice that both Node2vec and DeepWalk use the sigmoid function with embedding inner-products as the decoder to predict neighborhood probabilities. So when predicting whether vi and vj will interact in the future, we use σ(zᵀi zj) as the score, where zi and zj are the node embeddings. Notice that Node2vec has the extra hyper-parameters p and q, which control the likelihood of immediately revisiting a node in the walk and the interpolation between breadth-first and depth-first strategies. After selecting the optimal number of walks and walk-length under p = 1 and q = 1, we further tune different values of p in {0.2,0.4,0.6,0.8,1.0} while fixing q = 1. According to validation, p = 0.6 and 0.8 give comparable optimal performance.

For the GAE and VGAE baselines, we experiment with using one, two and three graph convolutional layers as the encoder (Kipf & Welling, 2016a) and use ReLU(·) as the activation function. Referencing the official implementation, we also set the dimension of the hidden layers to 200. Similar to previous findings, using two layers gives significantly better performance than using only one layer. Adding a third layer, on the other hand, shows almost identical results for both models. Therefore, the reported results are based on the two-layer GCN as the encoder. For GAE, we use the standard inner-product decoder as in our approach and optimize the reconstruction loss, and for VGAE, we restrict the latent factor space to be Gaussian (Kipf & Welling, 2016b). Since we have eliminated the temporal information when constructing the input, we find that the optimal hyper-parameters selected by tuning have similar patterns as in the previous non-temporal settings.

For the temporal network embedding model CTDNE, the walk length for the temporal random walk is also selected among {60,80,100}, where setting the walk length to 80 gives a slightly better validation outcome. The original paper considers several temporal edge selection (sampling) methods (uniform, linear and exponential) and finds uniform sampling to perform best (Nguyen et al., 2018). Since our setting is similar to theirs, we adopt the uniform sampling approach.

Inductive baselines.

For the GraphSAGE and GAT baselines, as mentioned before, we train the models in the same way as our approach with the temporal subgraph batching, despite several slight differences. Firstly, the aggregation layers in GraphSAGE usually consider a fixed neighborhood size via sampling, whereas our approach can take an arbitrary neighborhood as input. Therefore, we only consider the most recent dsample edges during each aggregation for all layers, and we find dsample = 20 gives the best performance among {10,15,20,25}.
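As a minimal sketch of this most-recent-neighbor truncation (the (source, destination, timestamp) edge-list format is our assumption):

def most_recent_neighbors(edges, node, t, d_sample=20):
    # edges: list of (src, dst, timestamp); keep the d_sample most recent
    # interactions of `node` that happened strictly before time t
    hist = [(ts, dst if src == node else src)
            for (src, dst, ts) in edges
            if (src == node or dst == node) and ts < t]
    hist.sort(key=lambda x: x[0])
    return hist[-d_sample:]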
Secondly, GAT implements a uniform neighborhood dropout. We also experiment with inverse timespan sampling for neighborhood dropout, and find that it gives slightly better performance but at the cost of computational efficiency, especially for large graphs. We consider aggregating over one-, two- and three-hop neighborhoods for both GAT and GraphSAGE. When working with three hops, we only experiment on GraphSAGE with the mean pooling aggregation. In general, using two hops gives comparable performance to using three hops. Notice that three-hop computations are costly, since the number of edges during aggregation increases exponentially with the number of hops. Thus we stick to using two hops for GraphSAGE, GAT and our approach. It is worth mentioning that when implementing GraphSAGE-LSTM, the input neighborhood sequences of the LSTM are also ordered by their interaction time.

Node classification with baselines.

The dynamic node classification with GraphSAGE and GAT can be conducted similarly to our approach, where we inductively compute the most up-to-date node embeddings and then input them as features to an MLP classifier. For the transductive baselines, it is not reasonable to predict the dynamic node labels with only the fixed node embeddings. Instead, we combine the node embedding with the embedding of the node it is interacting with when the label changes, e.g., combine the user embedding with the embedding of the Wikipedia page that the user attempted to edit when the system bans the user. To combine the pair of node embeddings, we experimented with summation, concatenation and bi-linear transformation. Under summation and concatenation, the combined embeddings are then used as input to an MLP classifier, whereas the bi-linear transformation directly outputs scores for classification. The validation outcomes suggest that using concatenation with the MLP yields the best performance.

A.5 IMPLEMENTATION DETAILS

Training. We implement Node2vec using the official C code5 on a 16-core Linux server with 500 GB of memory. DeepWalk is implemented with the official Python code6. We refer to the PyTorch Geometric library for implementing the GAE and VGAE baselines (Fey & Lenssen, 2019). To accommodate the temporal setting and incorporate edge features, we develop our own implementations of GraphSAGE and GAT in PyTorch by referencing their official implementations7 8. We also implement our model using PyTorch. All the deep learning models are trained on a machine with one Tesla V100 GPU. We use the Glorot initialization and the Adam SGD optimizer for all models, and apply the early-stopping strategy during training, where we terminate the training process if the validation AP score does not improve for 10 epochs.

5 https://github.com/snap-stanford/snap/tree/master/examples/node2vec
6 https://github.com/phanein/deepwalk
7 https://github.com/williamleif/GraphSAGE
8 https://github.com/PetarV-/GAT

Downstream node classification. As we discussed before, we use a three-layer MLP as the classifier and the (combined) node embeddings as input features for all the experimented approaches, for all three datasets. The MLP is trained with the Glorot initialization and the Adam SGD optimizer in PyTorch as well. The ℓ2 regularization parameter λ is selected from {0.001, 0.01, 0.05, 0.1, 0.2} case by case during training. The early-stopping strategy is also employed.
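A minimal sketch of this early-stopping protocol (the callback-style interface is our assumption):

def train_with_early_stopping(train_epoch, validate_ap, max_epochs=200, patience=10):
    # train_epoch(): runs one training epoch; validate_ap(): returns validation AP
    best_ap, best_epoch = -1.0, -1
    for epoch in range(max_epochs):
        train_epoch()
        ap = validate_ap()
        if ap > best_ap:
            best_ap, best_epoch = ap, epoch
        elif epoch - best_epoch >= patience:
            break   # no validation improvement for `patience` epochs
    return best_ap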
A.6 SENSITIVITY ANALYSIS AND EXTRA ABLATION STUDY

Firstly, we focus on the output node embedding dimension as well as the functional time encoding dimension in this sensitivity analysis. The reported results are averaged over five runs. We experiment on d ∈ {60, 80, 100, 120, 140} and dT ∈ {60, 80, 100, 120, 140}, and the results are reported in Figures 7a and 7c. The remaining model setups reported in Section 4.4 are untouched when varying d or dT. We observe slightly better outcomes when increasing either d or dT on the industrial dataset. The patterns on the Reddit and Wikipedia datasets are almost identical.

Secondly, we compare the two methods of learning the functional time encoding, i.e., using a flow-based model or using the non-parametric method introduced in Section 3.1. We experiment on two state-of-the-art flow-based CDF learning methods: normalizing flows (Rezende & Mohamed, 2015) and RealNVP (Dinh et al., 2016). We use the default model setups and hyper-parameters in their reference implementations9 10. We provide the results in Figure 6b. As we mentioned before, using flow-based models leads to outcomes highly comparable to the non-parametric approach, but they require longer training time since they implement sampling during each training batch. However, it is possible that carefully-tuned flow-based models can lead to nontrivial improvements, which we leave to future work.

9 https://github.com/ex4sperans/variational-inference-with-normalizing-flows
10 https://github.com/chrischute/real-nvp

Finally, we provide a sensitivity analysis on the number of attention heads and layers for TGAT. Recall that by stacking two layers in TGAT we are aggregating information from the two-hop neighborhood. For both accuracy and AP, using three-head attention and two layers gives the best outcome. In general, the results are relatively stable with respect to the number of heads, and stacking two layers leads to significant improvements compared with using only a single layer.

The ablation study comparing uniform neighborhood dropout and sampling with inverse timespan is given in Figure 6a. The two experiments are carried out under the same setting as reported in Section 4.4. We see that using inverse timespan sampling gives slightly worse performance. This is within expectation, since uniform sampling has an advantage in capturing recurrent patterns, which can be important for predicting user actions. On the other hand, the results also suggest the effectiveness of the proposed time encoding for capturing such temporal patterns. Moreover, we point out that using inverse timespan sampling slows down training, particularly for large graphs where a weighted sampling is conducted within a large number of nodes for each training batch construction. Nonetheless, inverse timespan sampling can help capture the more recent interactions, which may be more useful for certain tasks. Therefore, we suggest choosing the neighborhood dropout method according to the specific use case." } ]
2020
INDUCTIVE REPRESENTATION LEARNING ON TEMPORAL GRAPHS
SP:8361d709b85b1c717e2cf742dab0145fae667660
[ "This paper explores how graph neural networks can be applied to test satisfiability of 2QBF logical formulas. They show that a straightforward extension of a GNN-based SAT solver to 2QBF fails to outperform random chance, and argue that this is because proving either satisfiability or unsatisfiability of 2QBF requires reasoning over exponential sets of assignments. Instead, they show that GNNs can be useful as a heuristic candidate- or counterexample- ranking model which improves the efficiency of the CEGAR algorithm for solving 2QBF.", "This paper investigated the GNN-based solvers for the 2-Quantified Boolean Formula satisfiability problem. This paper points out that GNN has limitations in reasoning about unsatisfiability of SAT problems possibly due to the simple message-passing scheme. To extend the GNN-based SAT solvers to 2-QBF solvers, this paper then turns to learn GNN-based heuristics that work with traditional decision procedure, and proposes a CEGAR-based 2QBF algorithm." ]
It is valuable yet remains challenging to apply neural networks to logical reasoning tasks. Despite some successes witnessed in learning SAT (Boolean Satisfiability) solvers for propositional logic via Graph Neural Networks (GNN), there haven't been any successes in learning solvers for more complex predicate logic. In this paper, we target the QBF (Quantified Boolean Formula) satisfiability problem, the complexity of which is in-between propositional logic and predicate logic, and investigate the feasibility of learning GNN-based solvers and GNN-based heuristics for the cases with a universal-existential quantifier alternation (so-called 2QBF problems). We conjecture, with empirical support, that GNNs have certain limitations in learning 2QBF solvers, primarily due to the inability to reason about a set of assignments. Then we show the potential of GNN-based heuristics in CEGAR-based solvers, and explore the interesting challenges in generalizing them to larger problem instances. In summary, this paper provides a comprehensive survey of applying GNN-based embeddings to 2QBF problems, and aims to offer insights into applying machine learning tools to more complicated symbolic reasoning problems.
[]
[ { "authors": [ "Saeed Amizadeh", "Sergiy Matusevych", "Markus Weimer" ], "title": "Learning to solve circuit-SAT: An unsupervised differentiable approach", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hubie Chen", "Yannet Interian" ], "title": "A model for generating random quantified boolean formulas", "venue": "In IJCAI,", "year": 2005 }, { "authors": [ "Alonzo Church" ], "title": "A note on the entscheidungsproblem", "venue": "J. Symb. Log.,", "year": 1936 }, { "authors": [ "Stephen A. Cook" ], "title": "The complexity of theorem-proving procedures", "venue": "In Proceedings of the Third Annual ACM Symposium on Theory of Computing,", "year": 1971 }, { "authors": [ "Martin Davis", "George Logemann", "Donald W. Loveland" ], "title": "A machine program for theorem-proving", "venue": "Commun. ACM,", "year": 1962 }, { "authors": [ "Awni Y. Hannun", "Carl Case", "Jared Casper", "Bryan Catanzaro", "Greg Diamos", "Erich Elsen", "Ryan Prenger", "Sanjeev Satheesh", "Shubho Sengupta", "Adam Coates", "Andrew Y. Ng" ], "title": "Deep speech: Scaling up end-to-end speech recognition", "venue": "CoRR, abs/1412.5567,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Mikolás Janota", "João P. Marques Silva" ], "title": "Abstraction-based algorithm for 2qbf", "venue": "In SAT,", "year": 2011 }, { "authors": [ "Hans Kleine Büning", "Uwe Bubeck" ], "title": "Theory of quantified boolean formulas", "venue": "In Handbook of Satisfiability,", "year": 2009 }, { "authors": [ "Gil Lederman", "Markus N. Rabe", "Sanjit A. Seshia" ], "title": "Learning heuristics for automated reasoning through deep reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "M. Mézard", "G. Parisi", "R. Zecchina" ], "title": "Analytic and algorithmic solution of random satisfiability problems", "venue": "Science, 297(5582):812–815,", "year": 2002 }, { "authors": [ "Alexander Nadel", "Vadim Ryvchin", "Ofer Strichman" ], "title": "Efficient MUS extraction with resolution", "venue": "In FMCAD,", "year": 2013 }, { "authors": [ "Rama Kumar Pasumarthi", "Sebastian Bruch", "Xuanhui Wang", "Cheng Li", "Michael Bendersky", "Marc Najork", "Jan Pfeifer", "Nadav Golbandi", "Rohan Anil", "Stephan Wolf" ], "title": "Tf-ranking: Scalable tensorflow library for learning-to-rank", "venue": "In KDD,", "year": 2019 }, { "authors": [ "Judea Pearl" ], "title": "Reverend bayes on inference engines: A distributed hierarchical approach", "venue": "In AAAI,", "year": 1982 }, { "authors": [ "David A. Plaisted", "Armin Biere", "Yunshan Zhu" ], "title": "A satisfiability procedure for quantified boolean formulae", "venue": "Discrete Applied Mathematics,", "year": 2003 }, { "authors": [ "Markus N. Rabe", "Leander Tentrup", "Cameron Rasmussen", "Sanjit A. Seshia" ], "title": "Understanding and extending incremental determinization for 2qbf", "venue": "In Hana Chockler and Georg Weissenbacher (eds.), Computer Aided Verification,", "year": 2018 }, { "authors": [ "Darsh Ranjan", "Daijue Tang", "Sharad Malik" ], "title": "A comparative study of 2qbf algorithms. In IN PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE ON THEORY AND APPLICATIONS OF SATISFIABILITY TESTING (SAT’04", "venue": null, "year": 2004 }, { "authors": [ "Jussi Rintanen" ], "title": "Constructing conditional plans by a theorem-prover", "venue": "J. Artif. Intell. 
Res.,", "year": 1999 }, { "authors": [ "Horst Samulowitz", "Roland Memisevic" ], "title": "Learning to solve QBF", "venue": "In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence, July 22-26,", "year": 2007 }, { "authors": [ "Bart Selman", "Henry A. Kautz", "Bram Cohen" ], "title": "Local search strategies for satisfiability testing", "venue": "Computer Science,", "year": 1993 }, { "authors": [ "Daniel Selsam", "Nikolaj Bjørner" ], "title": "Guiding high-performance SAT solvers with unsat-core predictions", "venue": "In SAT,", "year": 2019 }, { "authors": [ "Daniel Selsam", "Matthew Lamm", "Benedikt Bünz", "Percy Liang", "Leonardo de Moura", "David L. Dill" ], "title": "Learning a SAT solver from single-bit supervision", "venue": "In ICLR (Poster). OpenReview.net,", "year": 2019 }, { "authors": [ "Mary Sheeran", "Satwinder Singh", "Gunnar Stålmarck" ], "title": "Checking safety properties using induction and a sat-solver", "venue": "In FMCAD,", "year": 2000 }, { "authors": [ "João P. Marques Silva", "Inês Lynce", "Sharad Malik" ], "title": "Conflict-driven clause learning SAT solvers", "venue": "In Handbook of Satisfiability,", "year": 2009 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel", "Timothy P. Lillicrap", "Karen Simonyan", "Demis Hassabis" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In ICLR. OpenReview.net,", "year": 2019 }, { "authors": [ "Pan Zhang", "Abolfazl Ramezanpour", "Lenka Zdeborová", "Riccardo Zecchina" ], "title": "Message passing for quantified boolean formulas", "venue": "CoRR, abs/1202.2536,", "year": 2012 } ]
[ { "heading": "1 INTRODUCTION", "text": "As deep learning makes astonishing achievements in the domain of image (He et al., 2016) and audio (Hannun et al., 2014) processing, natural languages (Vaswani et al., 2017), and discrete heuristics decisions in games (Silver et al., 2017), there is a profound interest in applying the relevant techniques in the field of logical reasoning. Logical reasoning problems span from simple propositional logic to complex predicate logic and high-order logic, with known theoretical complexities from NP-complete (Cook, 1971) to semi-decidable and undecidable (Church, 1936). Testing the ability and limitation of machine learning tools on logical reasoning problems leads to a fundamental understanding of the boundary of learnability and robust AI, and addresses the interesting questions in decision procedures in logic, symbolic reasoning, and program analysis and verification as defined in the programming language community.\nThere have been some successes in learning propositional logic reasoning (Selsam et al., 2019; Amizadeh et al., 2019), which focus on SAT (Boolean Satisfiability) problems as defined below. A propositional logic formula is an expression composed of Boolean constants (> : true, ⊥ : false) , Boolean variables (xi), and propositional connectives such as ∧, ∨, ¬ (for example (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2)). The SAT problem asks if a given formula can be satisfied (evaluated to >) by assigning proper Boolean values to the variables. A crucial feature of the logical reasoning domain (as is visible in the SAT problem) is that the inputs are often structural, where logical connections between entities (variables in SAT problems) are the key information. Accordingly, previous successes have used GNN (Graph Neural Networks) and message-passing based embeddings to solve SAT problems.\nHowever, it should be noted that logical decision procedures is more complex that just reading the formulas correctly. It is unclear if GNN embeddings (via simple message-passing) contain all the information needed to reason about complex logical questions on top of the graph structures derived from the formulas, or whether the complex embedding schemes can be learned from backpropagation. Previous successes on SAT problems argued for the power of GNN, which can handle NP-complete problems (Selsam et al., 2019; Amizadeh et al., 2019), but no successes have been reported for solving semi-decidable predicate logic problems via GNN. In order to find out where the limitation of GNN\nis and why, in learning logical reasoning problems, we decide to look at problems with complexity inbetween SAT and predicate logic problems, for which QBF (Quantified Boolean Formula) problems serve as excellent middle steps. QBF is an extension of propositional formula, which allows quantifiers (∀ and ∃) over the Boolean variables (such as ∀x1∃x2. (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2)). In general, a quantified Boolean formula in prenex normal form can be expressed as such:\nQiXiQi−1Xi−1...Q0X0. φ\nwhere Qi are quantifiers that always differ from their neighboring quantifiers, Xi are disjoint sets of Boolean variables, and φ is a propositional formula with all Boolean variables bounded in Qi. Complexity-wise, QBF problems are PSPACE-complete (Kleine Büning & Bubeck, 2009), which lies in-between the NP-completeness of SAT problems and the semi-decidability of predicate logic problems. 
Furthermore, 2QBF (QBF with only two alternating quantifier blocks) is Σᴾ₂-complete (Kleine Büning & Bubeck, 2009).

Another direction of addressing logical reasoning problems via machine learning is to learn heuristic decisions within traditional decision procedures. This direction is less appealing from a theoretical perspective, but more interesting from a practical one, since it has been shown to speed up SAT solvers in practical settings (Selsam & Bjørner, 2019). In this direction, there is less concern about the embedding power of GNNs, but more about the design of the training procedures (what are the data and labels for training) and how to incorporate the trained models within the decision procedures. The embeddings captured via GNNs are in fact preferred to be lossy, to prevent overfitting (Selsam & Bjørner, 2019).

In this paper we explore the potential applications of GNNs to 2QBF problems. In Section 2, we illustrate our designs of GNN architectures for embedding 2QBF. In Section 3, we evaluate GNN-based 2QBF solvers, and conjecture, with empirical evidence, that the current GNN techniques are unable to learn complete 2QBF solvers. In Section 4, we demonstrate the potential of GNN-based heuristics for selecting candidates and counter-examples in the CEGAR-based solver framework, and explore the challenges of generalizing them to larger problem instances. In Section 5, we discuss related work, and we conclude in Section 6. Throughout the paper we defer details to the supplementary materials.

We make the following contributions:

1. Design and test possible GNN architectures for embedding 2QBF. 2. Pinpoint the limitation of GNNs in learning logical decision procedures that need reasoning about a space of Boolean assignments. 3. Learn GNN-based CEGAR solver heuristics via supervised learning and uncover interesting challenges for GNNs to generalize across graph structures." }, { "heading": "2 GNN EMBEDDING OF PROPOSITIONAL LOGICAL FORMULAS", "text": "Preliminary: Graph Neural Networks. GNNs refer to the neural architectures devised to learn the embeddings of nodes and graphs via message-passing. Resembling the generic definition in Xu et al. (2019), they consist of two successive operators to propagate the messages and evolve the embeddings over iterations:

m_v^(k) = Aggregate^(k) ( {h_u^(k−1) : u ∈ N(v)} ),  h_v^(k) = Combine^(k) ( h_v^(k−1), m_v^(k) )  (1)

where h_v^(k) denotes the hidden state (embedding) of node v at the k-th layer/iteration, and N(v) denotes the neighbors of node v. In each iteration, Aggregate^(k)(·) aggregates hidden states from node v's neighbors to produce the new message (i.e., m_v^(k)) for node v, and Combine^(k)(·, ·) computes the new embedding of v from its last state and its current message. After a specific number of iterations (e.g., K), the embeddings should capture the global relational information of the nodes, which can be fed into other neural network modules for specific tasks.

GNN Architecture for Embedding SAT formulas. The previous success of GNN-based SAT solvers (Selsam et al., 2019) embedded SAT formulas as follows. Each SAT formula is translated into a bipartite graph, where one kind of node represents the literals (Boolean variables and their negations, denoted as L), and the other kind of node represents the clauses (sets of literals connected via ∨, denoted as C). Edges between literal and clause nodes represent the literal appearing in that clause, and all edges are represented by a sparse adjacency matrix (EdgeMatrix, E) of dimension |C| × |L|. There is also another kind of edge connecting each literal with its negation. The graph representation of (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2) is given below as an example. Note that this architecture is specific to propositional formulas in Conjunctive Normal Form (CNF), which are composed of clauses connected via ∧.

(Figure: a bipartite graph with clause nodes C1, C2 on one side and literal nodes x1, ¬x1, x2, ¬x2 on the other; clause-literal edges mark occurrences, and each literal is additionally linked to its negation.)
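A minimal sketch of building this EdgeMatrix from a CNF formula (the DIMACS-style signed-integer clause encoding and the literal ordering are our assumptions):

import numpy as np

def cnf_to_graph(n_vars, clauses):
    # literal column 2*(v-1) is x_v and column 2*(v-1)+1 is ~x_v, so a literal
    # and its negation occupy adjacent columns, making the negation pairing
    # implicit in the layout
    E = np.zeros((len(clauses), 2 * n_vars), dtype=np.float32)
    for c_idx, clause in enumerate(clauses):
        for lit in clause:
            E[c_idx, 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)] = 1.0
    return E   # the |C| x |L| EdgeMatrix

E = cnf_to_graph(2, [[1, -2], [-1, 2]])   # (x1 v ~x2) ^ (~x1 v x2)
print(E)   # [[1. 0. 0. 1.] [0. 1. 1. 0.]]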
The embeddings of literals and clauses are initialized with tiled random vectors. The GNN then uses MLPs to compute the messages of literals and clauses from the embeddings, and LSTMs to update the embeddings with the aggregated messages. The mathematical process for one iteration of message-passing is given below, where EmbL and EmbC denote the embedding matrices of literals and clauses respectively, MsgX→Y denotes messages from X to Y, MLPX denotes the MLP of X for generating messages from the embeddings, LSTMX denotes the LSTM of X for digesting incoming messages and updating the embeddings, and ·, (·)ᵀ, and [·, ·] denote matrix multiplication, transposition, and concatenation, respectively. Furthermore, Emb¬L denotes a permutational view of EmbL such that the same rows of EmbL and Emb¬L are the embeddings of a variable and its negation, respectively.

MsgL→C = E · MLPL(EmbL) #aggregate clauses
EmbC = LSTMC(EmbC, MsgL→C) #combine clauses
MsgC→L = Eᵀ · MLPC(EmbC) #aggregate literals
EmbL = LSTML(EmbL, [MsgC→L, Emb¬L]) #combine literals (2)

Note that different instances of the MLPs and LSTMs are used for clauses and literals (they have different subscripts). Moreover, Emb¬L is used as an additional message when updating EmbL.

GNN Architectures for Embedding 2QBF. The difference between SAT formulas and 2QBF is that in 2QBF the variables are quantified by ∀ or ∃. To reflect that difference in the graph representation, we separate ∀-literals and ∃-literals into different groups of nodes. For example, the graph representation of ∀x1∃x2. (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2) is shown below:

(Figure: the same bipartite graph as before, but with the ∀-literal nodes x1, ¬x1 and the ∃-literal nodes x2, ¬x2 placed in separate groups on either side of the clause nodes C1, C2.)

Accordingly, in the GNN architectures, the separated ∀-literals and ∃-literals are embedded via different modules. The GNN architecture design closely resembles the design philosophy of Selsam et al. (2019) in terms of permutation invariance and negation invariance, and would most likely carry over the success of GNNs in solving SAT problems to 2QBF problems.

MsgL→C = [E∀ · MLP∀(Emb∀), E∃ · MLP∃(Emb∃)] #aggregate clauses
EmbC = LSTMC(EmbC, MsgL→C) #combine clauses
MsgC→∀ = E∀ᵀ · MLPC→∀(EmbC) #aggregate ∀
Emb∀ = LSTM∀(Emb∀, [MsgC→∀, Emb¬∀]) #combine ∀
MsgC→∃ = E∃ᵀ · MLPC→∃(EmbC) #aggregate ∃
Emb∃ = LSTM∃(Emb∃, [MsgC→∃, Emb¬∃]) #combine ∃ (3)

Note that we use ∀ and ∃ to denote all ∀-literals and all ∃-literals respectively. We use EX to denote the EdgeMatrix between X and C, and MLPC→X to denote the MLP that generates MsgC→X. We in fact tested more GNN architectures for 2QBF (see supplementary material A.1), yet the model above performed the best in our later evaluation, so we used it in the main paper.
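For concreteness, a minimal PyTorch sketch of one iteration of the message passing in (3) follows (the layer sizes, the two-layer MLPs, and the assumption that each variable's positive and negative literals occupy adjacent rows are our own choices, not necessarily those of the original implementation):

import torch
import torch.nn as nn

def mlp(d):
    return nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

class TwoQbfRound(nn.Module):
    # one iteration of the message passing in (3); d is the embedding width
    def __init__(self, d):
        super().__init__()
        self.mlp_a, self.mlp_e = mlp(d), mlp(d)        # MLP_forall, MLP_exists
        self.mlp_c2a, self.mlp_c2e = mlp(d), mlp(d)    # MLP_{C->forall}, MLP_{C->exists}
        self.lstm_c = nn.LSTMCell(2 * d, d)            # clause update
        self.lstm_a = nn.LSTMCell(2 * d, d)            # forall-literal update
        self.lstm_e = nn.LSTMCell(2 * d, d)            # exists-literal update

    @staticmethod
    def negation_view(h):
        # swap rows 2k and 2k+1, which hold a literal and its negation
        return h.view(-1, 2, h.size(-1)).flip(1).view(-1, h.size(-1))

    def forward(self, E_a, E_e, h_c, h_a, h_e, c_c, c_a, c_e):
        msg_c = torch.cat([E_a @ self.mlp_a(h_a), E_e @ self.mlp_e(h_e)], dim=1)
        h_c, c_c = self.lstm_c(msg_c, (h_c, c_c))
        msg_a = torch.cat([E_a.t() @ self.mlp_c2a(h_c), self.negation_view(h_a)], dim=1)
        h_a, c_a = self.lstm_a(msg_a, (h_a, c_a))
        msg_e = torch.cat([E_e.t() @ self.mlp_c2e(h_c), self.negation_view(h_e)], dim=1)
        h_e, c_e = self.lstm_e(msg_e, (h_e, c_e))
        return h_c, h_a, h_e, c_c, c_a, c_e

# toy usage: 5 clauses, 2 forall variables (4 literals), 3 exists variables (6 literals)
d, rnd = 8, TwoQbfRound(8)
E_a, E_e = torch.rand(5, 4).round(), torch.rand(5, 6).round()
zeros = lambda n: torch.zeros(n, d)
out = rnd(E_a, E_e, zeros(5), zeros(4), zeros(6), zeros(5), zeros(4), zeros(6))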
" }, { "heading": "3 GNN-BASED SOLVERS FAIL IN 2QBF PROBLEMS", "text": "In the previous section, we discussed GNN-based embeddings of propositional logical formulas. We now test whether GNN-based 2QBF solvers can be learned, following the previous successes (Selsam et al., 2019; Amizadeh et al., 2019)." }, { "heading": "3.1 EMPIRICAL STUDY FOR REASONING 2QBF BY GNN", "text": "Data Preparation. In training and testing, we follow the previous work (Chen & Interian, 2005) to generate random 2QBF formulas of specs (2,3) and sizes (8,10). That is to say, each clause has 5 literals: 2 of them are randomly chosen from a set of 8 ∀-quantified variables, and 3 of them are randomly chosen from a set of 10 ∃-quantified variables. We modify the generation procedure so that it generates clauses until the formula becomes unsatisfiable. We then randomly negate one ∃-quantified literal per formula to get a very similar but satisfiable formula.

Predicting Satisfiability. We first tested whether our graph embeddings can be used to predict the satisfiability of 2QBF formulas. We extended the GNN architectures with a voting MLP (MLPvote) that takes the embeddings of the ∀-variables after the propagation and uses the average votes as the logits for satisfiability/unsatisfiability prediction:

logits_sat = mean(MLPvote(Emb∀))

We trained our GNNs with different amounts of data (40, 80, and 160 pairs of satisfiable/unsatisfiable formulas) and different numbers of message-passing iterations (8, 16, and 32), and then evaluated the converged models on 600 pairs of new instances. We report the accuracy on unsatisfiable and satisfiable formulas as tuples for both the training dataset and the testing dataset. By varying the random seeds, the models with the best training-data performance are selected and shown in Table 1. Since the paired satisfiable/unsatisfiable formulas differ by only one literal, this forces the GNNs to learn subtle structural differences in the formulas. The GNNs fit well to the smaller training datasets but have trouble with 160 pairs of formulas (the training rows in Table 1). Performance on the testing dataset is close to random guessing (the TESTING rows in Table 1), and running more iterations during testing does not help.

Table 1: GNN Performance to Predict SAT/UNSAT, reported as (accuracy on UNSAT, accuracy on SAT)
DATASET      40 PAIRS       80 PAIRS       160 PAIRS
8 ITERS      (0.98, 0.94)   (1.00, 0.92)   (0.84, 0.76)
TESTING      (0.40, 0.64)   (0.50, 0.48)   (0.50, 0.50)
16 ITERS     (1.00, 1.00)   (0.96, 0.96)   (0.88, 0.70)
TESTING      (0.54, 0.46)   (0.52, 0.52)   (0.54, 0.48)
32 ITERS     (1.00, 1.00)   (0.98, 0.98)   (0.84, 0.80)
TESTING      (0.32, 0.68)   (0.52, 0.50)   (0.52, 0.50)

Predicting Unsatisfiability Witnesses. Previous work (Selsam et al., 2019; Amizadeh et al., 2019) also showed successes in predicting satisfiability witnesses (variable assignments that satisfy the formulas) of SAT problems. 2QBF problems have unsatisfiability witnesses (assignments to the ∀ variables that render the reduced propositional formulas unsatisfiable). Next, we test if we can train GNNs to predict the unsatisfiability witnesses of 2QBF formulas. Specifically, the final embeddings of the ∀-variables are transformed into logits via an assignment MLP (MLPasn) and then used to compute the cross-entropy loss against the actual unsatisfiability witnesses of the formula:

logits_witness = MLPasn(Emb∀)

Once again we tried different amounts of training data (160, 320, and 640 unsatisfiable formulas) and different numbers of iterations (8, 16, and 32), and then tested the converged models on 600 new unsatisfiable 2QBF formulas. We report the accuracy per variable and the accuracy per formula as tuples for both the training dataset and the testing dataset in Table 2, from which we can observe that the GNNs fit well to the training data (the training rows in Table 2), especially with more message-passing iterations. However, the GNN performance on testing data is only slightly better than random guessing (the TESTING rows in Table 2), and running more iterations during testing does not help either.

Table 2: GNN Performance to Predict Witnesses of UNSAT, reported as (accuracy per variable, accuracy per formula)
DATASET      160 UNSAT      320 UNSAT      640 UNSAT
8 ITERS      (1.00, 0.99)   (0.95, 0.72)   (0.82, 0.28)
TESTING      (0.64, 0.06)   (0.67, 0.05)   (0.69, 0.05)
16 ITERS     (1.00, 1.00)   (0.98, 0.87)   (0.95, 0.69)
TESTING      (0.64, 0.05)   (0.65, 0.05)   (0.65, 0.06)
32 ITERS     (1.00, 1.00)   (0.99, 0.96)   (0.91, 0.57)
TESTING      (0.63, 0.05)   (0.64, 0.05)   (0.63, 0.05)
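A minimal sketch of this generation procedure follows (brute-force enumeration is used to test unsatisfiability, which is feasible at these sizes; note that the flipped twin is not guaranteed to be satisfiable, so in practice one would re-check it with the same test):

import random
from itertools import product

def has_exists_solution(x, n_forall, n_exists, clauses):
    # brute force: does phi[X <- x] have a satisfying exists-assignment?
    for ys in product([False, True], repeat=n_exists):
        asn = {**x, **{n_forall + i + 1: b for i, b in enumerate(ys)}}
        if all(any(asn[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def random_2qbf_pair(n_forall=8, n_exists=10, k_forall=2, k_exists=3, seed=0):
    rnd = random.Random(seed)
    clauses = []
    def unsat():
        return any(not has_exists_solution({i + 1: b for i, b in enumerate(xs)},
                                           n_forall, n_exists, clauses)
                   for xs in product([False, True], repeat=n_forall))
    while not unsat():   # grow random (2,3)-spec clauses until unsatisfiable
        a = rnd.sample(range(1, n_forall + 1), k_forall)
        e = rnd.sample(range(n_forall + 1, n_forall + n_exists + 1), k_exists)
        clauses.append([v if rnd.random() < 0.5 else -v for v in a + e])
    twin = [list(c) for c in clauses]
    c = rnd.randrange(len(twin))
    twin[c][rnd.randrange(k_forall, k_forall + k_exists)] *= -1   # flip an exists-literal
    return clauses, twin   # (unsat formula, similar twin to be re-checked for sat)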
" }, { "heading": "3.2 WHY GNN-BASED 2QBF SOLVERS FAILED", "text": "In contrast to our initial expectation, the results above clearly show that GNNs fail to learn a 2QBF solver, unlike the previous successes in solving SAT. To investigate what limits GNNs in learning a 2QBF solver, we first backtrack one step and examine the performance of GNNs on SAT problems.

Difficulty in Proving Unsatisfiability for SAT Problems. Interestingly, previous work showed that GNN-based SAT solvers actually had trouble predicting unsatisfiability with high confidence (Selsam et al., 2019) if the formulas do not have a small unsatisfiable core (a minimal set of clauses that suffices to cause unsatisfiability). Another work (Amizadeh et al., 2019) even completely removed unsatisfiable formulas from the training dataset (since they slowed down the training process), and only trained for predicting solutions to satisfiable formulas. However, these defects are not a problem for SAT solvers, since predicting satisfiability with high confidence is already good enough for a binary distinction.

The difficulty in proving unsatisfiability is understandable, since constructing a proof of unsatisfiability demands complete reasoning over the search space, which is more complex than constructing a proof of satisfiability that only requires a witness. Traditionally, this relies on recursive/iterative decision procedures that either traverse all possible assignments (implicitly or explicitly) to construct the proof (DPLL (Davis et al., 1962)), or generate extra constraints from assignment trials that lead to conflicts, until some of the constraints contradict each other (CDCL (Silva et al., 2009)). In comparison, the message-passing scheme in GNNs does not seem to resemble either of those procedures, but is rather similar to a subfamily of incomplete SAT solvers (WalkSAT (Selman et al., 1993)) that randomly assign variables and stochastically search for local witnesses. Similarly, those SAT solvers cannot prove unsatisfiability.

GNN-based 2QBF Solvers are Conjecturally Infeasible. For 2QBF problems, the construction of both proofs (of satisfiability and of unsatisfiability) needs complete reasoning about a search space. A proof of satisfiability needs to show that for all possible assignments of the ∀ variables, there exist satisfying assignments of the ∃ variables, while a proof of unsatisfiability needs to show that, given a witness (an assignment of the ∀ variables), the reduced propositional formula is proven unsatisfiable. Given this, it is reasonable to conjecture that GNNs are probably incapable of constructing either proof, and thus unable to learn a 2QBF solver. Traditional decision procedures (such as CEGAR-based solvers (Rabe et al., 2018)) have a way to incrementally construct such proofs, but it is unlikely that the message-passing scheme in GNNs is capable of such a task. Here, we conjecture that learning complete SAT solvers and 2QBF solvers is infeasible with the current GNN architectures and message-passing schemes."
}, { "heading": "4 LEARN GNN-BASED HEURISTICS FOR 2QBF", "text": "In Section 3, we conjecture (with empirical support) that GNN-based 2QBF solvers are infeasible. Thus the successes of learning GNN-based SAT solvers (Selsam et al., 2019; Amizadeh et al., 2019) cannot be simply extended to more complex logical reasoning problems. Therefore we pivot our attention to learning GNN-based heuristics that work with traditional decision procedures. Considerable QBF decision procedures have been proposed due to the importance of QBF solvers in fields such as conditional planning (Rintanen, 1999) and symbolic model checking (Plaisted et al., 2003). In this section, we will just focus on the CEGAR (Counter Example Guided Abstraction Refinement) based solving algorithm." }, { "heading": "4.1 CEGAR-BASED 2QBF ALGORITHM", "text": "We first present the CEGAR-based solving procedure (Janota & Silva, 2011) in Algorithm 1. Iteratively, the CEGAR algorithm proposes an assignment of all ∀ variables as a candidate, which reduces the 2QBF formula to a SAT formula. If the SAT formula is proven unsatisfiable, the candidate becomes a witness and the algorithm returns (unsat, witness). Otherwise, a satisfying assignment\nof ∃ variables can be found as a counter-example. Each counter-example disables a set of potential candidates, and this constraints on candidates can be expressed via accumulated clauses in the constraint SAT formula ω (details in supplementary material A.2). New candidates must be proposed from the satisfying solutions of ω, to avoid proposing candidates that are already countered (thus abstract refinement). As counter-examples add clauses to ω, ω may become unsatisfiable, meaning that no more candidates can be proposed. In that case, the algorithm returns (sat, -).\nAlgorithm 1 CEGAR-based 2QBF Algorithm Input: ∀X∃Y φ Output: (sat, -) or (unsat, witness) Initialize constraints ω as an empty set of clauses. while true do\n# proposing candidates (has-candidate, candidate) = SAT-solver(ω) if not has-candidate then\nreturn (sat, -) end if # proposing counter-examples (has-counter, counter) = SAT-solver(φ[X candidate]) if not has-counter then\nreturn (unsat, candidate) end if # abstract refinement # details in supplementary material A.2 add counter to constraints ω\nend while The algorithm is clearly exponential, since both of the search spaces (of the candidates and the counter-examples) are exponential. It is also intuitive that the quality of candidates and counterexamples affects the runtime of the algorithm. The traditional decision procedures have proposed a MaxSAT-based heuristics, which states that the good candidates should maximize the number of unsatisfied clauses in the formula (thus making the reduced SAT problem difficult), and the good counter-examples should maximize the number of satisfied clauses in the formula (thus providing a strong constraint on the candidates) (Janota & Silva, 2011). However, MaxSAT-based heuristics are not practical, due to the heavy overhead of MaxSAT procedures. Furthermore, the number of clauses only relates to the difficulty of the SAT problems and the strength of the constraints, but does not directly decide it. This motivates us to test whether GNN-based heuristics can be used instead." }, { "heading": "4.2 BASIC SETUPS FOR GNN-BASED HEURISTICS", "text": "There are challenges in integrating neural-based heuristics into CEGAR-based solvers, since each proposed assignment (candidate or counter-example) must fit some logical constraints (i.e. 
" }, { "heading": "4.2 BASIC SETUPS FOR GNN-BASED HEURISTICS", "text": "There are challenges in integrating neural-based heuristics into CEGAR-based solvers, since each proposed assignment (candidate or counter-example) must fit some logical constraints (i.e., it must satisfy a SAT formula). It is rather difficult to add logical constraints to neural-based proposals, but relatively easy to employ neural-based ranking on proposals that already satisfy the logical constraints. In fact, it is rather easy to ask a SAT solver for multiple satisfying assignments, if they exist. Therefore we choose to use the GNN-based embeddings to rank multiple assignments, instead of directly predicting the best assignments. We also benefit from more training data and less risk of overfitting with the ranking methodology.

To get rankings from the GNN-based embeddings, we first transform the embeddings (of all ∀ variables or all ∃ variables) into a scoring matrix (Sm) via a scoring MLP (MLPscore). Then a batch of assignments (A) is ranked by passing through a two-layer perceptron (using Sm and a learnable weighting vector Wv as weights, without biases):

Sm = MLPscore(Emb)
RankingScoresLogits = ReLU(A · Sm) · Wv

During training, we make use of the TensorFlow Ranking library (Pasumarthi et al., 2019) to compute the pairwise logistic loss with NDCG lambda weights. We then incorporate the trained models into the CEGAR cycles by replacing the SAT-solver subroutine with a procedure that returns the highest-ranked solution among multiple solutions to a given SAT formula. Note that when used in CEGAR-based solvers, the GNN models only need to embed each formula once to get the scoring matrix (Sm), which is then used in all the following iterations to solve that formula. This is a significant improvement compared with previous work (Lederman et al., 2018).

The evaluations are done on 4 separate datasets:
• TrainU: 1000 unsatisfiable formulas used for training
• TrainS: 1000 satisfiable formulas used for training
• TestU: 600 unsatisfiable formulas used for testing
• TestS: 600 satisfiable formulas used for testing
with 2 baselines:
• -: vanilla CEGAR without ranking
• MaxSAT: ranking by the number of satisfied clauses via on-the-fly formula simplification (Note that although MaxSAT performs the best in our evaluations, it is too expensive to use in practice. See the asymptotic analysis in supplementary material A.2.)
via measuring the average number of iterations needed to solve the 2QBF problems. Here we choose to measure the number of iterations rather than the wall-clock time, because the former only measures the quality of our heuristics, while the latter is subject to various optimizations and implementation details that involve lots of engineering effort (out of the scope of this paper). From multiple random seeds, we report the results of the models that perform best on the training datasets.

Table 3: Performance of CEGAR Candidate-Ranking
DATASET   TRAINU   TRAINS   TESTU    TESTS
-         21.976   34.783   21.945   33.885
MAXSAT    13.144   30.057   12.453   28.863
GNN1      14.387   31.800   14.273   30.588
GNN2      13.843   31.404   13.787   30.273

Table 4: Performance of CEGAR Counter-Example-Ranking
DATASET   TRAINU   TRAINS   TESTU    TESTS
-         21.976   34.783   21.945   33.885
MAXSAT    14.754   22.265   14.748   21.638
GNN3      16.95    26.717   16.743   24.325
GNN4      17.492   26.962   17.198   25.198
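A minimal numpy sketch of this ranking readout (the two-layer form of MLPscore and all shapes are our assumptions; the weights themselves would be trained with the pairwise logistic loss via TF-Ranking as described above):

import numpy as np

def rank_assignments(emb, A, W1, W2, w_v):
    # emb: (n_vars, d) final variable embeddings; A: (K, n_vars) 0/1 assignments;
    # W1, W2 parameterize MLP_score; w_v is the learnable weighting vector
    Sm = np.maximum(emb @ W1, 0.0) @ W2          # scoring matrix MLP_score(Emb)
    logits = np.maximum(A @ Sm, 0.0) @ w_v       # ReLU(A . Sm) . Wv
    return np.argsort(-logits), logits           # indices from best to worst

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))                       # e.g., 8 forall variables, d = 16
A = rng.integers(0, 2, size=(5, 8)).astype(float)    # 5 candidate assignments
order, logits = rank_assignments(emb, A, rng.normal(size=(16, 16)),
                                 rng.normal(size=(16, 16)), rng.normal(size=16))
print(order)

Because Sm is computed once per formula, ranking each subsequent batch of assignments costs only two small matrix multiplications.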
" }, { "heading": "4.3 RANKING THE CANDIDATES", "text": "Since the 2QBF formulas used for training are quite small (the same dataset as in Section 3), we can basically enumerate all assignments in the search space to generate the training data. The interesting question left is how we assign ranking scores to all the possible candidates. One way is to follow the MaxSAT style and rank them based on the number of clauses they satisfy (the fewer the better, shown as "GNN1" in Table 3). Another way is to rank them based on the number of solutions to the reduced SAT formula (the fewer the better, shown as "GNN2" in Table 3), since having fewer solutions relates to more difficult SAT problems and thus stronger candidates (see details of the ranking scores in supplementary material A.2).

As shown in Table 3, all 3 ranking heuristics (including the GNN-based ones and the MaxSAT baseline) improved the solving performance on all 4 datasets. The improvement on unsatisfiable problems is more significant, since we are ranking the candidates. GNN2 seems slightly better than GNN1, implying that generating training data by hardness (the number of solutions to the reduced SAT formula) is probably the better choice." }, { "heading": "4.4 RANKING THE COUNTEREXAMPLES", "text": "We generate the training dataset for counter-examples in a similar fashion. Once again we propose two ways to generate the ranking scores. One way is to follow the MaxSAT style and rank them based on the number of clauses they satisfy (the more the better, shown as "GNN3" in Table 4). Another way is to adjust the ranking score of "GNN3" based on whether the counter-examples are associated with the unsatisfiable cores of ω, the constraint SAT formula (shown as "GNN4" in Table 4; see details of the ranking scores in supplementary material A.2).

As shown in Table 4, all 3 ranking heuristics improved the solving performance on all 4 datasets. The improvement on satisfiable problems is more significant this time, since we are ranking the counter-examples. However, GNN4 performs slightly worse than GNN3. The result implies that the additional information regarding the unsatisfiable cores, contrary to our expectation, actually hurts the GNN-based heuristic. The likely explanation is that information associated with unsatisfiable cores in 2QBF may be too complicated for GNNs to capture, which points back to the limitation of GNNs in reasoning about the whole solution space and unsatisfiability." }, { "heading": "4.5 COMBINATION OF THE HEURISTICS", "text": "It is reasonable to assume that ranking both the candidates and the counter-examples will further improve the solver performance. We retrained GNN models using the ranking datasets for both candidates and counter-examples, so that we still do just one GNN embedding per formula. We evaluated GNN1-3 (combining the training data of GNN1 and GNN3), GNN2-3 (combining the training data of GNN2 and GNN3), and GNN2-4 (combining the training data of GNN2 and GNN4). As shown in Table 5, GNN2-3 is arguably our best GNN-based model under this ranking method. We further compute the relative improvement of GNN2-3, which is the ratio of the improvement of GNN2-3 over "-" to the improvement of MaxSAT over "-", expressed as a percentage. That is shown in the last row of Table 5 as GNN2-3R." }, { "heading": "4.6 EVALUATION OF LARGER 2QBF PROBLEMS", "text": "We then tested the performance of our best GNN-based heuristics (GNN2-3) on larger 2QBF problems that are extended in two different ways. On one hand, we fixed the specs (the number of ∀- and ∃-literals per clause) but increased the sizes (the total number of ∀ variables or ∃ variables per formula). This essentially generated larger graphs with similar connectivity (as in the upper half of Table 6).
On the other hand, we fixed the sizes but increased the specs (as in the lower half of Table 6), which essentially generated graphs with different vertex degrees. We changed the number of clauses per formula such that about half of the randomly generated 2QBF formulas are satisfiable.

We list the evaluation results in Table 6. The DataSet column shows the specs (the first tuple), the sizes (the second tuple), and the satisfiability status with the number of clauses per formula (the letter/number after the second tuple).

As shown in Table 6, the GNN-based heuristics generalize well to larger sizes (the upper half of Table 6). The relative improvement via GNN2-3 is about 70% of that of the MaxSAT baseline (modulo the (2,3)(8,40)S521 dataset, which is hard to improve with either heuristic for some reason), which is similar to its performance on the smaller instances in Table 5. On the other hand, the GNN-based heuristics do not generalize as well to instances with larger specs. For the datasets with either one more ∀-literal per clause or one more ∃-literal per clause, the relative improvement via GNN2-3 is about 50%. For the dataset with both one more ∀-literal and one more ∃-literal per clause, GNN2-3 failed to generalize completely.

This reveals an interesting challenge in GNN-based embedding, and structural data embedding in general. It is natural for GNN-based embeddings to generalize to larger graphs if the vertex degrees remain unchanged; in that case, it is almost like embedding a larger batch of data. However, it is not intuitive to claim that GNN-based embeddings generalize to graphs with different vertex degrees. This caveat should promote more research on message-passing schemes and structural data embedding in general." }, { "heading": "5 RELATED WORK AND DISCUSSION", "text": "Without using the existing decision procedures, several reasoning methods purely based on neural networks have been proposed for SAT solving. Selsam et al. (2019) presented a GNN architecture that embedded the propositional formulas. From single-bit supervision (whether the formula is satisfiable or not), the GNN learned a procedure to find satisfying assignments before issuing predictions. Also, the GNN embeddings converge given more embedding iterations, indicating that the learned procedure is stable. Amizadeh et al. (2019) further improved this line of work by adopting an RL-style explore-exploit mechanism, while considering circuit-SAT problems and DAG embeddings. They trained their DAG architectures via guided gradient descent and showed that their DAG embeddings found solutions faster than the previous GNN embeddings, but they did not attempt to tackle unsatisfiable formulas. Our paper tries to extend this line of work to 2QBF problems, and we show that the inability to reason about unsatisfiability prevents GNNs from becoming a 2QBF solver. The recent work of Xu et al. (2019) discussed the expressive power of GNNs, but not in the logical reasoning context.

Samulowitz & Memisevic (2007) applied classification to predict optimal choices of heuristics inside a portfolio-based and a dynamic QBF solver. Similar to our work, Lederman et al. (2018) targeted the 2QBF problem and used GNN-based embeddings to learn branching heuristics in the CADET solver in a reinforcement learning setting. However, they had to embed an updated 2QBF formula for each branching step, thus incurring a high embedding overhead. To reduce the overhead, the authors used very simple GNN architectures.
They also used a small number of message-passing iterations (in fact, one iteration performed best), which defeats the purpose of GNNs, because a one-iteration GNN reduces to a neighbor-counting model. In contrast, we design our solver/heuristics such that only one GNN embedding is needed per formula, which significantly reduces the GNN inference overhead. As a result, we can use more sophisticated GNN architectures with more message-passing iterations.

Belief propagation (BP) is a Bayesian message-passing method first proposed by Pearl (1982); it is a useful approximation algorithm and has been applied to SAT problems (specifically 3-SAT (Mézard et al., 2002)) and 2QBF problems (Zhang et al., 2012). BP can find the witnesses of unsatisfiability of 2QBF by adopting a bias-estimation strategy. Each round of BP allows the user to select the most biased ∀-variable and assign the biased value to that variable. After all the ∀-variables are assigned, the formula is simplified by the assignment and sent to SAT solvers. The procedure returns the assignment as a witness of unsatisfiability if the simplified formula is unsatisfiable, or UNKNOWN otherwise. However, the fact that BP is run for each ∀-variable assignment leads to high overhead, similar to the RL approach of Lederman et al. (2018). It is interesting, however, to see that with the added overhead, BP can find witnesses of unsatisfiability, which is what one-shot GNN-based embeddings cannot achieve.

QBF problems have attracted a lot of research attention due to their theoretical interest and practical applications in artificial intelligence (Rintanen, 1999), automated theorem proving (Ranjan et al., 2004), and sequential circuit verification (Sheeran et al., 2000). The subclass of 2QBF is worth studying in its own right, due to applications in AI planning generalized to non-deterministic domains (Rabe et al., 2018), and planning with exponentially long plans (PSPACE-complete) (Castellini et al., 2001)." }, { "heading": "6 CONCLUSION", "text": "In this paper we investigated GNN-based 2QBF solvers and GNN-based 2QBF heuristics. We revealed the previously unrecognized limitation of GNNs in reasoning about the unsatisfiability of SAT problems, and conjectured that this limitation prevents GNNs from learning solvers for more complex logical reasoning problems such as the 2QBF satisfiability problem. This limitation is probably rooted in the simplicity of the message-passing scheme, which is good enough for embedding graph structures, but not for conducting complex reasoning on top of them. We then demonstrated that learning GNN-based 2QBF heuristics is potentially successful, though it still faces interesting challenges in terms of generalization across graph structures. Our work extends previous progress in this field, and offers insights into applying machine learning tools to symbolic reasoning in general." }, { "heading": "A APPENDIX", "text": "A.1 ALL GNN-EMBEDDING ARCHITECTURES

We use the subscript symbol ∀ to denote all ∀-quantified literals, ∃ to denote all ∃-quantified literals, L to denote all literals, and C to denote all clauses. We use EmbX to denote the embeddings of X, where X can be ∀, ∃, L, or C. We use Emb¬X to denote the embeddings of the negations of X (∀, ∃, or L), which is a permutational view of EmbX such that the same rows of EmbX and Emb¬X are the embeddings of a variable and its negation, respectively. We use MsgX→Y to denote messages from X to Y.
We also use MLPX to denote MLPs that generate messages from the embeddings of X, MLPX→Y to denote MLPs that generate messages from the embeddings of X for Y, LSTMX to denote LSTMs that update the embeddings of X given incoming messages, and LSTMX←Y to denote LSTMs that update the embeddings of X given incoming messages from Y. We also use EX to denote the sparse adjacency matrix between X (∀, ∃, or L) and the clauses, X · Y to denote matrix multiplication of X and Y, [X, Y] to denote matrix concatenation of X and Y, and Xᵀ to denote matrix transposition of X.

Our first form of GNN message-passing for 2QBF (Model 1) is given below.

Model 1: MsgC = E∀ · MLP∀(Emb∀) + E∃ · MLP∃(Emb∃)
EmbC = LSTMC(EmbC, MsgC)
MsgC→∀ = E∀ᵀ · MLPC(EmbC)
Emb∀ = LSTM∀(Emb∀, [MsgC→∀, Emb¬∀])
MsgC→∃ = E∃ᵀ · MLPC(EmbC)
Emb∃ = LSTM∃(Emb∃, [MsgC→∃, Emb¬∃])

In Model 2, we update the clause embedding with 2 LSTMs, each of which takes the messages from the ∀ and ∃ literals, respectively.

Model 2: Msg∀→C = E∀ · MLP∀(Emb∀)
Msg∃→C = E∃ · MLP∃(Emb∃)
EmbC = LSTMC←∀(EmbC, Msg∀→C)
EmbC = LSTMC←∃(EmbC, Msg∃→C)
MsgC→∀ = E∀ᵀ · MLPC(EmbC)
Emb∀ = LSTM∀(Emb∀, [MsgC→∀, Emb¬∀])
MsgC→∃ = E∃ᵀ · MLPC(EmbC)
Emb∃ = LSTM∃(Emb∃, [MsgC→∃, Emb¬∃])

We switch the order of these 2 LSTMs in Model 3.

Model 3: Msg∃→C = E∃ · MLP∃(Emb∃)
Msg∀→C = E∀ · MLP∀(Emb∀)
EmbC = LSTMC←∃(EmbC, Msg∃→C)
EmbC = LSTMC←∀(EmbC, Msg∀→C)
MsgC→∀ = E∀ᵀ · MLPC(EmbC)
Emb∀ = LSTM∀(Emb∀, [MsgC→∀, Emb¬∀])
MsgC→∃ = E∃ᵀ · MLPC(EmbC)
Emb∃ = LSTM∃(Emb∃, [MsgC→∃, Emb¬∃])

In Model 4 we concatenate the messages from the ∀ and ∃ literals.

Model 4: MsgC = [E∀ · MLP∀(Emb∀), E∃ · MLP∃(Emb∃)]
EmbC = LSTMC(EmbC, MsgC)
MsgC→∀ = E∀ᵀ · MLPC(EmbC)
Emb∀ = LSTM∀(Emb∀, [MsgC→∀, Emb¬∀])
MsgC→∃ = E∃ᵀ · MLPC(EmbC)
Emb∃ = LSTM∃(Emb∃, [MsgC→∃, Emb¬∃])

The performance of our GNN architectures improved greatly after we realized that (in Model 5) we may also need to use different MLP modules to generate messages from clauses to the ∀ and ∃ literals. Note that this is the model we reported in the main paper, and the one we used for all results reported there.

Model 5: MsgC = [E∀ · MLP∀(Emb∀), E∃ · MLP∃(Emb∃)]
EmbC = LSTMC(EmbC, MsgC)
MsgC→∀ = E∀ᵀ · MLPC→∀(EmbC)
Emb∀ = LSTM∀(Emb∀, [MsgC→∀, Emb¬∀])
MsgC→∃ = E∃ᵀ · MLPC→∃(EmbC)
Emb∃ = LSTM∃(Emb∃, [MsgC→∃, Emb¬∃])

We also explore the possibility (in Model 6) of having two embeddings for each clause, one serving the ∀ literals and one serving the ∃ literals. We need extra notations: EmbX→Y denotes the embeddings of X that serve Y.
We also explore the possibility (in Model 6) of having two embeddings for each clause, one serving the ∀ literals and one serving the ∃ literals. We need extra notations: Emb_{X→Y} denotes embeddings of X that serve Y, and LSTM_{X→Y} denotes LSTMs that update the embedding of X that serves Y.
Model 6:
Msg_C = [E_∀ · MLP_∀(Emb_∀), E_∃ · MLP_∃(Emb_∃)]
Emb_{C→∀} = LSTM_{C→∀}(Emb_{C→∀}, Msg_C)
Emb_{C→∃} = LSTM_{C→∃}(Emb_{C→∃}, Msg_C)
Msg_{C→∀} = E_∀^T · MLP_{C→∀}(Emb_{C→∀})
Emb_∀ = LSTM_∀(Emb_∀, [Msg_{C→∀}, Emb_{¬∀}])
Msg_{C→∃} = E_∃^T · MLP_{C→∃}(Emb_{C→∃})
Emb_∃ = LSTM_∃(Emb_∃, [Msg_{C→∃}, Emb_{¬∃}])
We further explore the possibility (in Model 7) that our embedding scheme should reflect a CEGAR cycle, which starts from the ∀ variables (proposing candidates), goes to the clauses, then to the ∃ variables (finding counter-examples), back to the clauses, and finally back to the ∀ variables.
Model 7:
Msg_{∀→C} = E_∀ · MLP_∀(Emb_∀)
Emb_{C→∃} = LSTM_{C→∃}(Emb_{C→∃}, Msg_{∀→C})
Msg_{C→∃} = E_∃^T · MLP_{C→∃}(Emb_{C→∃})
Emb_∃ = LSTM_∃(Emb_∃, [Msg_{C→∃}, Emb_{¬∃}])
Msg_{∃→C} = E_∃ · MLP_∃(Emb_∃)
Emb_{C→∀} = LSTM_{C→∀}(Emb_{C→∀}, Msg_{∃→C})
Msg_{C→∀} = E_∀^T · MLP_{C→∀}(Emb_{C→∀})
Emb_∀ = LSTM_∀(Emb_∀, [Msg_{C→∀}, Emb_{¬∀}])
A.2 CEGAR ALGORITHM AND RANKING SCORES
Steps of abstraction refinement in CEGAR-based 2QBF solvers. This paragraph explains in detail how abstraction refinement in CEGAR-based 2QBF solvers works (Janota & Silva, 2011) for our 2QBF formulas in CNF (conjunctive normal form). Basically, abstraction refinement is about maintaining and augmenting the constraint SAT formula ω, the solutions of which are the candidates in the next round of the iteration.
In the main paper, we said that we initialize ω as an empty set of clauses. That was a simplification. Actually, we initialize ω with many variables and clauses. The variables include all the ∀ variables of the 2QBF formula, plus a fresh variable z_c for each clause c of the 2QBF formula. Intuitively, the variable z_c represents that the clause c is not satisfied by the candidates. ω is also initialized with many 2-sized clauses as follows: for each clause c of the 2QBF formula, we add the clause (¬z_c ∨ ¬l) for each ∀ literal l in c. It should be clear that this initialization poses no constraints on the ∀ variables, since we can set all z_c to false to satisfy all clauses of ω trivially.
For each counter-example (an assignment to all ∃ variables), we compute the set of 2QBF clauses that are not satisfied by the counter-example (call them residual clauses). Intuitively, the constraint should say: at least one of the residual clauses should not be satisfied by the next proposed candidate, so that the current counter-example cannot refute it. That constraint is realized by adding to ω one clause (⋁ z_c over all c in the residual clauses). This clause guarantees that at least one of the residual clauses is not satisfied by the next candidates, and it transfers the constraints to the related ∀ variables via the corresponding (¬z_c ∨ ¬l) clauses in ω.
Asymptotic analysis of GNN-based heuristics vs. the MaxSAT baseline. To rigorously compare the overhead of the GNN-based heuristics with the MaxSAT baseline, let us assume a 2QBF instance with N_∀ ∀-variables, N_∃ ∃-variables, and M clauses. We also assume that, on average, each clause has n_∀ ∀-literals and n_∃ ∃-literals. Now we need to rank K candidates or counter-examples.
The time complexity of the MaxSAT baseline (on-the-fly formula simplification) is O(K·M·n_∀): for each candidate and each clause, checking whether the candidate satisfies the clause takes n_∀ steps.
We also assume that the second dimension of the scoring matrix (S_m) is d. The time complexity of the GNN-based heuristics is equivalent to the 2 matrix multiplications of dimensions (K, N_∀) × (N_∀, d) and (K, d) × (d, 1), which is O(K·N_∀·d); a code sketch of these two multiplications is given below.
To compare the complexities, we can safely assume that both d and n_∀ are small constants.
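To make the O(K·N_∀·d) cost concrete, here is a minimal NumPy sketch of the two scoring multiplications; S_m and w are hypothetical names for the learned scoring matrix and the readout vector, and all sizes are illustrative.

import numpy as np

K, N_forall, d = 64, 20, 16                  # illustrative sizes
rng = np.random.default_rng(0)
candidates = rng.integers(0, 2, (K, N_forall)).astype(float)  # K candidate ∀-assignments
S_m = rng.normal(size=(N_forall, d))         # scoring matrix derived from the GNN embedding
w = rng.normal(size=(d, 1))                  # readout vector

# Two matmuls: (K, N_forall) x (N_forall, d), then (K, d) x (d, 1) -- O(K * N_forall * d).
scores = (candidates @ S_m) @ w              # one score per candidate
best = int(np.argmax(scores))                # index of the highest-ranked candidate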
However, N_∀ (the number of ∀ variables in the formula) is often much smaller than M (the number of clauses in the formula). Moreover, in practice, matrix multiplications can be easily parallelized and accelerated on many kinds of hardware with BLAS libraries. Of course, the GNN-based heuristics need the GNN embeddings computed via message-passing, but those are computed only once per formula, so the cost is amortized.
GNN1 candidate ranking score, based on a list of the numbers of satisfied clauses:

def n_clauses_list_2_ranking_scores(n_clauses_list):
    # Candidates that satisfy fewer clauses get higher scores,
    # clipped from below at 1.
    n_clauses_min = min(n_clauses_list)
    return [max(1, 10 - n_clauses + n_clauses_min)
            for n_clauses in n_clauses_list]

GNN2 candidate ranking score, based on a list of the numbers of solutions to the simplified SAT formula:

def n_solutions_2_ranking_score(n_solutions):
    # Piecewise score: candidates whose simplified formula has fewer
    # solutions are harder to counter and thus score higher.
    if n_solutions <= 3: return 10.0 - n_solutions
    if n_solutions <= 5: return 6.0
    if n_solutions <= 8: return 5.0
    if n_solutions <= 12: return 4.0
    if n_solutions <= 16: return 3.0
    if n_solutions <= 21: return 2.0
    return 1.0

def n_solutions_list_2_ranking_scores(n_solutions_list):
    return [n_solutions_2_ranking_score(n_solutions)
            for n_solutions in n_solutions_list]

GNN3 counter-example ranking score, based on a list of the numbers of satisfied clauses:

def n_clauses_list_2_ranking_scores_counter(n_clauses_list):
    # Counter-examples that satisfy more clauses get higher scores.
    n_clauses_max = max(n_clauses_list)
    return [max(1, 10 - n_clauses_max + n_clauses)
            for n_clauses in n_clauses_list]

GNN4 counter-example ranking score, adjusted from the GNN3 ranking scores based on unsatisfiable cores:

import numpy

def unsat_core_2_ranking_scores_counter(core_index, n_clauses_list):
    # core_index marks the indices of the scores that are in the unsatisfiable
    # cores; those counter-examples are pinned to the maximal score of 10.
    n_clauses_max = max(n_clauses_list)
    scores = [max(1, 8 - n_clauses_max + n_clauses)
              for n_clauses in n_clauses_list]
    scores = numpy.array(scores)
    scores[core_index] = 10
    return scores.tolist()

Procedure to determine which counter-examples are associated with the unsatisfiable cores: Since each counter-example adds one clause to the constraint SAT formula ω, determining the unsatisfiable cores of ω (the smallest subset of clauses that constrains the formula to be unsatisfiable) gives us the sets of counter-examples that are directly associated with the unsatisfiable cores.
For satisfiable 2QBF formulas in the training dataset, we generate all clauses of ω from all counter-examples, and then solve ω with hmucSAT (Nadel et al., 2013) for unsatisfiable cores. For unsatisfiable 2QBF formulas, we again collect all clauses of ω from all counter-examples. In this case ω is satisfiable, and its solutions are actually witnesses of unsatisfiability. To obtain unsatisfiable cores, we add the solutions back to ω as additional constraints, until ω becomes unsatisfiable, at which point the cores are returned." } ]
2019
GRAPH NEURAL NETWORKS FOR REASONING 2-QUANTIFIED BOOLEAN FORMULAS
SP:2b8df72b380b893a55a82934afd558d75a3f42f2
[ "Review: This paper considers the problem of dropping neurons from a neural network. In the case where this is done randomly, this corresponds to the widely studied dropout algorithm. If the goal is to become robust to randomly dropped neurons during evaluation, then it seems sufficient to just train with dropout (there is also a gaussian approximation to dropout using the central limit theorem called \"fast dropout\"). ", "This contribution studies the impact of deletions of random neurons on prediction accuracy of trained architecture, with the application to failure analysis and the specific context of neuromorphic hardware. The manuscript shows that worst-case analysis of failure modes is NP hard and contributes a theoretical analysis of the average case impact of random perturbations with Bernouilli noise on prediction accuracy, as well as a training algorithm based on aggregation. The difficulty of tight bounds comes from the fact that with many layers a neural network can have a very large Lipschitz constant. The average case analysis is based on wide neural networks and an assumption of a form of smoothness in the values of hidden units as the width increases. The improve fitting procedure is done by adding a set of regularizing terms, including regularizing the spectral norm of the layers." ]
The loss of a few neurons in a brain rarely results in any visible loss of function. However, what "few" means in this context is unclear: how many random neuron failures does it take to cause a visible loss of function? In this paper, we address the fundamental question of the impact of the crash of a random subset of neurons on the overall computation of a neural network and on the error in the output it produces. We study the fault tolerance of neural networks subject to small random neuron/weight crash failures in a probabilistic setting, and we give provable guarantees on the robustness of the network to these crashes. Our main contribution is a bound on the error in the output of a network under small random Bernoulli crashes, proved by using a Taylor expansion in the continuous limit, where close-by neurons at a layer are similar. The failure mode we adopt in our model is characteristic of neuromorphic hardware, a promising technology to speed up artificial neural networks, as well as of biological networks. We show that our theoretical bounds can be used to compare the fault tolerance of different architectures and to design a regularizer improving the fault tolerance of a given architecture. We design an algorithm achieving fault tolerance using a reasonable number of neurons. In addition to the theoretical proof, we also provide experimental validation of our results and suggest a connection to the generalization capacity problem.
[]
[ { "authors": [ "D. Amodei", "D. Hernandez" ], "title": "AI and compute", "venue": "Downloaded from https://blog.openai.com/ai-and-compute,", "year": 2018 }, { "authors": [ "D. Amodei", "C. Olah", "J. Steinhardt", "P. Christiano", "J. Schulman", "D. Mané" ], "title": "Concrete problems in ai safety", "venue": "arXiv preprint arXiv:1606.06565,", "year": 2016 }, { "authors": [ "R. Arratia", "L. Gordon" ], "title": "Tutorial on large deviations for the binomial distribution", "venue": "Bulletin of mathematical 620 biology,", "year": 1989 }, { "authors": [ "I. Benjamini", "G. Kalai", "O. Schramm" ], "title": "Noise sensitivity of boolean functions and applications to percolation", "venue": "Publications Mathématiques de l’Institut des Hautes Etudes Scientifiques,", "year": 1999 }, { "authors": [ "L. Chizat", "E. Oyallon", "F. Bach" ], "title": "On lazy training in differentiable programming", "venue": null, "year": 2019 }, { "authors": [ "A. Doerig", "A. Schurger", "K. Hess", "M.H. Herzog" ], "title": "The unfolding argument: Why iit and other causal 625 structure theories cannot explain consciousness", "venue": "Consciousness and cognition,", "year": 2019 }, { "authors": [ "E. El Mhamdi", "R. Guerraoui", "S. Rouault" ], "title": "On the robustness of a neural network", "venue": "In Reliable Distributed 627 Systems (SRDS),", "year": 2017 }, { "authors": [ "S.K. Esser", "R. Appuswamy", "P. Merolla", "J.V. Arthur", "D.S. Modha" ], "title": "Backpropagation for energy-efficient 629 neuromorphic computing", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "B. Ghorbani", "S. Krishnan", "Y. Xiao" ], "title": "An investigation into neural net optimization via hessian eigenvalue 631 density", "venue": "arXiv preprint arXiv:1901.10159,", "year": 1901 }, { "authors": [ "W.H. Guss" ], "title": "Deep function machines: Generalized neural networks for topological layer expression", "venue": "arXiv 633 preprint arXiv:1612.04799,", "year": 2016 }, { "authors": [ "A. Ilyas", "S. Santurkar", "D. Tsipras", "L. Engstrom", "B. Tran", "A. Madry" ], "title": "Adversarial examples are not bugs, 635 they are features", "venue": "arXiv preprint arXiv:1905.02175,", "year": 1905 }, { "authors": [ "A. Jacot", "F. Gabriel", "C. Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "Advances in 638 Neural Information Processing Systems", "year": 2018 }, { "authors": [ "N. Le Roux", "Y. Bengio" ], "title": "Continuous neural networks", "venue": "In Artificial Intelligence and Statistics, pages 404–411,", "year": 2007 }, { "authors": [ "M. Li", "D.G. Andersen", "J.W. Park", "A.J. Smola", "A. Ahmed", "V. Josifovski", "J. Long", "E.J. Shekita", "B.-Y. 642 Su" ], "title": "Scaling distributed machine learning with the parameter server", "venue": "In OSDI,", "year": 2014 }, { "authors": [ "M. Liu", "L. Xia", "Y. Wang", "K. Chakrabarty" ], "title": "Fault tolerance in neuromorphic computing systems", "venue": "Proceedings of the 24th Asia and South Pacific Design Automation Conference,", "year": 2019 }, { "authors": [ "E.M.E. Mhamdi", "R. Guerraoui" ], "title": "When neurons fail", "venue": "IEEE International Parallel and Distributed 646 Processing Symposium (IPDPS),", "year": 2017 }, { "authors": [ "V. 
Nagarajan" ], "title": "Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise- 648 resilience", "venue": null, "year": 2019 }, { "authors": [ "Naihong Wei", "Shiyuan Yang", "Shibai Tong" ], "title": "A modified learning algorithm for improving the fault tolerance 650 of bp networks", "venue": "In Proceedings of International Conference on Neural Networks (ICNN’96),", "year": 1996 }, { "authors": [ "B. Neyshabur", "S. Bhojanapalli", "D. McAllester", "N. Srebro" ], "title": "Exploring Generalization in Deep Learning", "venue": "coRR,", "year": 2017 }, { "authors": [ "M. Romera", "P. Talatchian", "S. Tsunegi", "F.A. Araujo", "V. Cros", "P. Bortolotti", "J. Trastoy", "K. Yakushiji", "655 A. Fukushima", "H. Kubota" ], "title": "Vowel recognition with four coupled spin-torque nano-oscillators", "venue": null, "year": 2018 }, { "authors": [ "S. Sonoda", "N. Murata" ], "title": "Double continuum limit of deep neural networks", "venue": "In ICML Workshop Principled 658 Approaches to Deep Learning,", "year": 2017 }, { "authors": [ "Y. Tang", "R.R. Salakhutdinov" ], "title": "Learning stochastic feedforward neural networks", "venue": "In Advances in Neural 660 Information Processing Systems,", "year": 2013 }, { "authors": [ "A. Tran", "S. Yanushkevich", "S. Lyshevski", "V. Shmerko" ], "title": "Design of neuromorphic logic networks and fault- 662 tolerant computing", "venue": "In 2011 11th IEEE International Conference on Nanotechnology,", "year": 2011 }, { "authors": [ "M.D. Zeiler", "R. Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In European conference on 665 computer vision,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Understanding the inner working of artificial neural networks (NNs) is currently one of the most pressing questions (20) in learning theory. As of now, neural networks are the backbone of the most successful machine learning solutions (37; 18). They are deployed in safety-critical tasks in which there is little room for mistakes (10; 40). Nevertheless, such issues are regularly reported since attention was brought to the NNs vulnerabilities over the past few years (37; 5; 24; 8). Fault tolerance as a part of theoretical NNs research. Understanding complex systems requires understanding how they can tolerate failures of their components. This has been a particularly fruitful method in systems biology, where the mapping of the full network of metabolite molecules is a computationally quixotic venture. Instead of fully mapping the network, biologists improved their understanding of biological networks by studying the effect of deleting some of their components, one or a few perturbations at a time (7; 12). Biological systems in general are found to be fault tolerant (28), which is thus an important criterion for biological plausibility of mathematical models. Neuromorphic hardware (NH). Current Machine Learning systems are bottlenecked by the underlying computational power (1). One significant improvement over the now prevailing CPU/GPUs is neuromorphic hardware. In this paradigm of computation, each neuron is a physical entity (9), and the forward pass is done (theoretically) at the speed of light. However, components of such hardware are small and unreliable, leading to small random perturbations of the weights of the model (41). Thus, robustness to weight faults is an overlooked concrete Artificial Intelligence (AI) safety problem (2). Since we ground the assumptions of our model in the properties of NH and of biological networks, our fundamental theoretical results can be directly applied in these computing paradigms. Research on NN fault tolerance. In the 2000s, the fault tolerance of NNs was a major motivation for studying them (14; 16; 4). In the 1990s, the exploration of microscopic failures was fueled by the hopes of developing neuromorphic hardware (NH) (22; 6; 34). Taylor expansion was one of the tools used for the study of fault tolerance (13; 26). Another line of research proposes sufficient conditions for robustness (33). However, most of these studies are either empirical or are limited to simple architectures (41). In addition, those studies address the worst case (5), which is known to be\nUnder review as a conference paper at ICLR 2020\nmore severe than a random perturbation. Recently, fault tolerance was studied experimentally as well. DeepMind proposes to focus on neuron removal (25) to understand NNs. NVIDIA (21) studies error propagation caused by micro-failures in hardware (3). In addition, mathematically similar problems are raised in the study of generalization (29; 30) and robustness (42). The quest for guarantees. Existing NN approaches do not guarantee fault tolerance: they only provide heuristics and evaluate them experimentally. Theoretical papers, in turn, focus on the worst case and not on errors in a probabilistic sense. It is known that there exists a set of small worstcase perturbations, adversarial examples (5), leading to pessimistic bounds not suitable for the average case of random failures, which is the most realistic case for hardware faults. 
Another branch of theoretical research studies robustness and arrives at error bounds which, unfortunately, scale exponentially with the depth of the network (29). We define the goal of this paper as guaranteeing that the probability of the loss exceeding a threshold is lower than a pre-determined small value. This condition is sensible: for example, self-driving cars are deemed to be safe once their probability of a crash is several orders of magnitude less than that of human drivers (40; 15; 36). In addition, current fault-tolerant architectures use the mean to aggregate copies of networks in order to achieve redundancy. This is known to require exponentially more redundancy, and thus hardware cost, compared to the median approach. In order to apply this powerful technique and reduce costs, certain conditions need to be satisfied, which we evaluate for neural networks. Contributions. Our main contribution is a theoretical bound on the error in the output of an NN in the case of random neuron crashes, obtained in the continuous limit, where close-by neurons at a layer are similar. We show that, while the general problem of fault tolerance is NP-hard, realistic assumptions with regard to neuromorphic hardware, and a probabilistic approach to the problem, allow us to apply a Taylor expansion for the vast majority of the cases, as the weight perturbation is small with high probability. In order for the Taylor expansion to work, we assume that the network is smooth enough, introducing the continuous limit (39) to prove the properties of NNs: it requires neighboring neurons at each layer to be similar. This makes the moments of the error computable in linear time. To our knowledge, the tightness of the bounds we obtain is a novel result. In turn, the bound allows us to build an algorithm that enhances the fault tolerance of neural networks. Our algorithm uses median aggregation, which results in only a logarithmic extra cost – a drastic improvement on the initial NP-hardness of the problem. Finally, we show how to apply the bounds to specific architectures and evaluate them experimentally on real-world networks, notably the widely used VGG (38). Outline. In Sections 2-4, we set up the formalism, then state our bounds. In Section 5, we present applications of our bounds to characterizing the fault tolerance of different architectures. In Section 6, we present our algorithm for certifying fault tolerance. In Section 7, we present our experimental evaluation. Finally, in Section 8, we discuss the consequences of our findings. Full proofs are available in the supplementary material. Code is provided at the anonymized repo github.com/iclr-2020-fault-tolerance/code. We abbreviate Assumption 1 → A1, Proposition 1 → P1, Theorem 1 → T1, Definition 1 → D1." }, { "heading": "2 DEFINITIONS OF PROBABILISTIC FAULT TOLERANCE", "text": "In this section, we define a fully-connected network and fault tolerance formally. Notations. For any two vectors x, y ∈ R^n we use the notation (x, y) = Σ_{i=1}^n x_i y_i for the standard scalar product. The matrix γ-norm for γ ∈ (0, +∞] is defined as ‖A‖_γ = sup_{x ≠ 0} ‖Ax‖_γ / ‖x‖_γ. We use the infinity norm ‖x‖_∞ = max_i |x_i| and the corresponding operator matrix norm. We call a vector 0 ≠ x ∈ R^n q-balanced if min_i |x_i| ≥ q · max_i |x_i|. We denote [n] = {1, 2, ..., n}. We define the Hessian H_{ij} = ∂²y(x)/∂x_i∂x_j as the matrix of second derivatives. We write layer indices down and element indices up: W_l^{ij}. For the input, we write x^i ≡ x_i. If the layer is fixed, we omit its index.
We use the element-wise Hadamard product (x ⊙ y)^i = x^i y^i. Definition 1. (Neural network) A neural network with L layers is a function y_L : R^{n_0} → R^{n_L} defined by a tuple (L, W, B, φ), with a tuple of weight matrices W = (W_1, ..., W_L) (or their distributions) of size W_l : n_l × n_{l−1} and biases B = (b_1, ..., b_L) (or their distributions) with b_l ∈ R^{n_l}, by the expression y_l = φ(z_l) with pre-activations z_l = W_l y_{l−1} + b_l, l ∈ [L], y_0 = x and y_L = z_L. Note that the last layer is linear. We additionally require φ to be 1-Lipschitz¹. We assume that the network was trained using input-output pairs x, y* ∼ X × Y using ERM² for a loss ω. The loss layer for input x and the true label y*(x) is defined as y_{L+1}(x) = E_{y*∼Y|x} ω(y_L(x), y*), with ω ∈ [−1, 1]³.
¹ 1-Lipschitz: φ s.t. |φ(x) − φ(y)| ≤ |x − y|. If φ is K-Lipschitz, we rescale the weights to make K = 1: W_l^{ij} → W_l^{ij}/K. This is the general case. Indeed, if we rescale φ(x) → Kφ(x), then y_{l−1} → K y′_{l−1}, and in the sum z′_l = Σ (W^{ij}/K) · K y_{l−1} ≡ z_l.
Definition 2. (Weight failure) A network (L, W, B, φ) with weight failures U of distribution U ∼ D|(x, W) is the network (L, W + U, B, φ) for U ∼ D|(x, W). We denote a (random) output of this network as y_{W+U}(x) = ŷ_L(x), with activations ŷ_l and pre-activations ẑ_l, as in D1. Definition 3. (Bernoulli neuron failures) The Bernoulli neuron crash distribution is the distribution with i.i.d. ξ_l^i ∼ Be(p_l) and U_l^{ij} = −ξ_l^i · W_l^{ij}. For each possible crashing neuron i at layer l we define U_l^i = Σ_j |U_l^{ij}| and W_l^i = Σ_j |W_l^{ij}|, the crashed incoming weights and the total incoming weights. We note that we see neuron failure as a sub-type of weight failure.
This definition means that neurons crash independently, and they start to output 0 when they do. We use this model because it mimics essential properties of NH (41). Components fail relatively independently, as we model faults as random (41). In terms of (41), we consider stuck-at-0 crashes, and passive fault tolerance in terms of reliability. Definition 4. (Output error for a weight distribution) The error in case of a weight failure with distribution D|(x, W) is ∆_l(x) = y_l^{W+U}(x) − y_l^W(x) for layers l ∈ [L+1].
We extend the definition of ε-fault tolerance from (23) to the probabilistic case: Definition 5. (Probabilistic fault tolerance) A network (L, W, B, φ) is said to be (ε, δ)-fault tolerant over an input distribution (x, y*) ∼ X × Y and a crash distribution U ∼ D|(x, W) if P_{(x,y*)∼X×Y, U∼D|(x,W)}{∆_{L+1}(x) ≥ ε} ≤ δ. For such a network, we write (W, B) ∈ FT(L, φ, p, ε, δ).
Interpretation. To evaluate the fault tolerance of a network, we compute the first moments of ∆_{L+1}. Next, we use tail bounds to guarantee (ε, δ)-FT. This definition means that, with high probability 1 − δ, the additional loss due to faults does not exceed ε. The expectation over the crashes U ∼ D|x can be interpreted in two ways. First, for a large number of neural networks, each having permanent crashes, E∆ is the expectation over all instances of a network implemented in hardware multiple times. For a single network with intermittent crashes, E∆ is the average output of this one network over repetitions. The recent review study (41) identifies three types of faults: permanent, transient, and intermittent. Our Definition 2 thus covers all these cases. Now that we have a definition of fault tolerance, we show in the next section that the task of certifying or even computing it is hard.
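To make Definitions 3-5 concrete, the following minimal NumPy sketch estimates the crash error by Monte Carlo on a toy network. It tracks the output error ∆_L rather than the loss ∆_{L+1}, omits biases, and all sizes are illustrative; it is not the paper's code.

import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 32, 32, 1]                     # toy widths n_0, n_1, n_2, n_L
Ws = [rng.normal(size=(m, n)) / np.sqrt(n) for n, m in zip(sizes, sizes[1:])]
phi = np.tanh                              # a 1-Lipschitz activation

def forward(x, crash_p=0.0):
    # Forward pass; each hidden neuron is zeroed i.i.d. with prob. crash_p
    # (stuck-at-0 crashes, cf. Definition 3).
    y = x
    for l, W in enumerate(Ws):
        z = W @ y
        y = z if l == len(Ws) - 1 else phi(z)
        if crash_p > 0 and l < len(Ws) - 1:
            y = y * (rng.random(y.shape) >= crash_p)
    return y

x = rng.normal(size=sizes[0])
clean = forward(x)
p, eps, trials = 1e-3, 0.05, 10_000
deltas = np.array([abs(forward(x, p) - clean).item() for _ in range(trials)])
delta_hat = np.mean(deltas >= eps)         # empirical P{Δ ≥ ε}, cf. Definition 5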
}, { "heading": "3 THE HARDNESS OF FAULT TOLERANCE", "text": "In this section, we show why fault tolerance is a hard problem. Not only it is NP-hard in the most general setting but, also, even for small perturbations, the error of the output of can be unacceptable." }, { "heading": "3.1 NP-HARDNESS", "text": "A precise assessment of an NN’s fault tolerance should ideally diagnose a network by looking at the outcome of every possible failure, i.e. at the Forward Propagated Error (23) resulting from removing every possible subset of neurons. This would lead to an exact assessment, but would be impractical in the face of an exponential explosion of possibilities as by Proposition 1 (proof in the supplementary material). Proposition 1. The task of evaluating E∆k for any k = 1, 2, ... with constant additive or multiplicative error for a neural network with ϕ ∈ C∞, Bernoulli neuron crashes and a constant number of layers is NP-hard.\nWe provide a theoretical alternative for the practical case of neuromorphic hardware. We overcome NP-hardness in Section 4 by providing an approximation dependent on the network, and not a constant factor one: for weights W we give ∆ and ∆ dependent on W such that ∆(W ) ≤ E∆ ≤ ∆(W ). In addition, we only consider some subclass of all networks." }, { "heading": "3.2 PESSIMISTIC SPECTRAL BOUNDS", "text": "By Definition 4, the fault tolerance assessment requires to consider a weight perturbation W + U given current weightsW and the loss change yL+1(W+U)−yL+1(W ) caused by it. Mathematically, 2Empirical Risk Minimization – the standard task 1/k ∑m k=1 ω(yL(xk), y ∗ k)→ min\n3The loss is bounded for the proof of Algorithm 1’s running time to work\nUnder review as a conference paper at ICLR 2020\nthis means calculating a local Lipschitz coefficientK (43) connecting |yL+1(W +U)−yL+1(W )| ≤ K|U |. In the literature, there are known spectral bounds on the Lipschitz coefficient for the case of input perturbations. These bounds use the spectral norm of the matrix ‖ · ‖2 and give a global result, valid for any input. This estimate is loose due to its exponential growth in the number of layers, as ‖W‖2 is rarely < 1. See Proposition 2 for the statement:\nProposition 2 (K using spectral properties). ‖yL(x2)− yL(x1)‖2 6 ‖x2 − x1‖2 · ∏L l=1 ‖Wl‖2\nThe proof can be found in (29) or in the supplementary material. It is also known that high perturbations under small input changes are attainable. Adversarial examples (5) are small changes to the input resulting in a high change in the output. This bound is equal to the one of (23), which is tight in case if the network has the fewest neurons. In contrast, in Section 4, we derive our bound in the limit n→∞. We have now shown that even evaluating fault tolerance of a given network can be a hard problem. In order to make the analysis practical, we use additional assumptions based on the properties of neuromorphic hardware." }, { "heading": "4 REALISTIC SIMPLIFYING ASSUMPTIONS FOR NEUROMORPHIC HARDWARE", "text": "In this section, we introduce realistic simplifying assumptions grounded in neuromorphic hardware characteristics. We first show that if faults are not too frequent, the weight perturbation would be small. Inspired by this, we then apply a Taylor expansion to the study of the most probable case. 4\nAssumption 1. The probability of failure p = max{pl ∣∣l ∈ [L]} is small: p . 10−4..10−3\nThis assumption is based on the properties of neuromorphic hardware (35). 
Next, we use the internal structure of neural networks.
Assumption 2. The number of neurons at each layer n_l is sufficiently big: n_l ≳ 10².
This assumption comes from the properties of state-of-the-art networks (1). The best and the worst fault tolerance. Consider a 1-layer NN with n = n_0 and n_L = n_1 = 1 at input x_i ≡ 1: y(x) = Σ_i x_i/n. We must include the factor 1/n to preserve y(x) as n grows. This is the most robust network, as all neurons are interchangeable. Here E∆ = −p and Var∆ = p/n: the variance decays with n. In contrast, the worst case y(x) = x_1 has all but one neuron unused. Therefore E∆ = p and Var∆ = p: the variance does not decay with n. The next proposition shows that, under a mild additional regularity assumption on the network, Assumptions 1 and 2 are sufficient to show that the perturbation of the norm of the weights is small.
Proposition 3. Under A1-2, and if {W_l^i}_{i=1}^{n_l} are q-balanced, then for α > p the norm of the weight perturbation U_l^i at layer l is probabilistically bounded as δ_0 = P{‖U_l^·‖_1 ≥ α·‖W_l^·‖_1} ≤ exp(−n_l · q · d_KL(α ‖ p_l)), with the KL-divergence between numbers a, b ∈ (0, 1) defined as d_KL(a, b) = a·log(a/b) + (1−a)·log((1−a)/(1−b)), and W_l^i from D3.
Inspired by this result, we next compute the error ∆ given a small weight perturbation U using a Taylor expansion.⁵
⁴ The inspiration for splitting the loss calculation into favorable and unfavorable cases comes from (27).
⁵ In order to certify fault tolerance, we need precise bounds on the remainder of the Taylor approximation. For example, for ReLU functions, the Taylor approximation fails. The supplementary material contains another counter-example to the Taylor expansion of an NN. Instead, we give sufficient conditions under which the Taylor approximation indeed holds.
Assumption 3. As the width n increases, the networks NN_n have a continuous limit (39), NN_n → NN_c, where NN_c is a continuous neural network (19) and n = min{n_l}. The network NN_c has globally bounded operator derivatives D_k for orders k = 1, 2. We define D_{12} = max{D_1, D_2}.⁶
See Figure 1 for a visualization of A3 and Table 1 for the description of A3. The assumption means that with the increase of n, the network uses the same internal structure, which just becomes more fine-grained. The continuous limit holds in the case of explicit duplication, convolutional networks, and corresponding explicit regularization. The supplementary material contains a more complete explanation. The derivative bound for order 2 is in contrast to the worst-case spectral bound, which would be exponential in depth as in Proposition 2. This is consistent with experimental studies (11) and can be connected to generalization properties via minima sharpness (17). Proposition 4. Under A3, the derivatives equal the operator derivatives of the continuous limit:
∂^k y_L / (∂y_l^{i_1} ⋯ ∂y_l^{i_k}) = (1/n_l^k) · δ^k y_L / (δy_l(i_1) ⋯ δy_l(i_k)) + o(1),   n_l → ∞
For example⁷, consider y(x) = (1/n_1) Σ_{i_1=1}^{n_1} φ(Σ_{i_0=1}^{n_0} x_{i_0}/n_0) at x_i ≡ 1. The factors 1/n_0 and 1/n_1 appear because the network must represent the same y* as n_0, n_1 → ∞. Then ∂y/∂x_i = φ′(1)/n_1 and ∂²y/∂x_i∂x_j = φ″(1)/n_1². Theorem 1. For crashes at layer l and output error ∆_L at layer L, under A1-3, with q = 1/n_l and r = p + q, the mean and variance of the error can be approximated as
E∆_L = p_l · Σ_{i=1}^{n_l} ∂y_L/∂ξ^i |_{ξ=0} + Θ_±(1)·D_2·r²,   Var∆_L = p_l · Σ_{i=1}^{n_l} (∂y_L/∂ξ^i |_{ξ=0})² + Θ_±(1)·D_{12}·r³
By Θ_±(1) we denote any function taking values in [−1, 1].⁸
The full proof of the theorem is in the supplementary material; a small numerical illustration of the first-order formulas is given below.
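As a purely illustrative check of Theorem 1 (not one of the paper's experiments), the following NumPy sketch compares the first-order formulas for E∆ and Var∆ with a Monte-Carlo crash simulation on a toy one-hidden-layer network, for which ∂y_L/∂ξ^i = −v_i·h_i can be written down analytically; all names and sizes are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
n0, n1 = 10, 200
W = rng.normal(size=(n1, n0)) / np.sqrt(n0)
v = rng.normal(size=n1) / n1            # output weights scaled so y stays O(1) as n1 grows
x = rng.normal(size=n0)
h = np.tanh(W @ x)                      # hidden activations
y = v @ h
p = 1e-2                                # exaggerated crash probability, for visibility

# For this toy net, dy_L/dxi^i at xi = 0 is -v_i * h_i, so Theorem 1 gives:
grad = -v * h
E_theory = p * grad.sum()
Var_theory = p * (grad ** 2).sum()

# Monte-Carlo estimate of the same moments under Bernoulli crashes (Definition 3).
trials = 100_000
mask = (rng.random((trials, n1)) >= p).astype(float)   # surviving hidden neurons
deltas = mask @ (v * h) - y
print(E_theory, deltas.mean())          # should roughly agree
print(Var_theory, deltas.var())         # should roughly agree, up to a (1 - p) factor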
The remainder terms are small, as both p and q = 1/n_l are small quantities under A1-2. In addition, P4 implies ∂y_L/∂ξ^i ∼ 1/n_l, and thus, as n_l → ∞, E∆ = O(1) remains constant while Var∆_L = O(1/n_l). This is the standard rate for estimating the mean of a random variable by averaging over n_l independent samples, and our example at the beginning of this section shows that it is the best possible rate. Our result gives sufficient conditions under which neural networks allow such a simplification.⁹ In the next sections we use the obtained theoretical evaluation to develop a regularizer increasing fault tolerance, and to say which architectures are more fault-tolerant." }, { "heading": "5 PROBABILISTIC GUARANTEES ON FAULT TOLERANCE USING TAIL BOUNDS", "text": "In this section, we apply the results from the previous sections to obtain a probabilistic guarantee on fault tolerance. We identify which kinds of architectures are more fault-tolerant.
⁶ A necessary condition for D_k to be bounded is to have a reasonable bound on the derivatives of the ground-truth function y*(x). We assume that this function is sufficiently smooth.
⁷ The proposition is illustrated in proof-of-concept experiments with explicit regularization in the supplementary material. There are networks for which the conclusion of P4 would not hold, for example, a network with w_{ij} = 1. However, such a network does not approximate the same function as n increases, since y(x) → ∞, violating A3.
⁸ The derivative ∂y_L/∂ξ^i(ξ) ≡ −∂y_L(y_l − ξ ⊙ y_l)/∂y_l^i · y_l^i is interpreted as if ξ^i were a real variable.
⁹ However, the dependency Var∆ ∼ 1/n_l is only valid if n < p^{−2} ∼ 10^8, so that the first-order term dominates, p/n > r³. In case this is not true, we can still render the network more robust by aggregating multiple copies with a mean, instead of adding more neurons. Our current guarantees thus work in the case p² ≤ n^{−1} ≤ p. In the supplementary material, we show that a tighter remainder, depending only on p/n and hence decreasing with n, is possible. However, it complicates the equation, as it requires D_3.
Under the assumptions of the previous sections, the variance of the error decays as Var∆ ∼ Σ_l C_l·p_l/n_l, as the error superposition is linear (see the supplementary material for a proof), with C_l not dependent on n_l. Given a fixed budget of neurons, the most fault-tolerant NN has its layers balanced: one layer with too few neurons becomes a single point of failure. Specifically, an optimal architecture with a fixed sum N = Σ_l n_l has n_l ∼ √(p_l·C_l).
Given the previous results, certifying (ε, δ)-fault tolerance is trivial via a Chebyshev tail bound (proof in the supplementary material):
Proposition 5. A neural network under Assumptions 1-3 is (ε, δ)-fault tolerant for t = ε − E∆_L > 0 with δ = t^{−2}·Var∆_L, for E∆ and Var∆ calculated by Theorem 1.
Evaluating E∆ or Var∆ using Theorem 1 takes the same amount of time as one forward pass, whereas the exact assessment would need O(2^n) forward passes by Proposition 1. In order to make networks more fault-tolerant, we now want to solve the problem of loss minimization under fault tolerance rather than ERM (as previously formulated in (41)): inf_{(W,B)∈FT} L(W, B), where FT = FT(L, φ, p, ε, δ) from Definition 5. Regularizing¹⁰ with Equation 1 can be seen as an approximate solution to the problem above. Indeed, Var∆ ≈ p_l·Σ_i (∂L/∂y_l^i · y_l^i)² (from T1) is connected to the target probability (P5); a small code sketch of this variance term is given below.
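As an illustration of this variance term (again not the paper's code), here is a minimal NumPy sketch for a toy one-hidden-layer network with a squared loss, where ∂L/∂y_l^i can be written analytically; the value of λ is a hypothetical choice.

import numpy as np

rng = np.random.default_rng(2)
n0, n1 = 10, 200
W = rng.normal(size=(n1, n0)) / np.sqrt(n0)
v = rng.normal(size=n1) / n1
x, y_star = rng.normal(size=n0), 0.3      # one training pair (illustrative)

h = np.tanh(W @ x)
y = v @ h
loss = (y - y_star) ** 2
dL_dh = 2.0 * (y - y_star) * v            # analytic dL/dy_l^i for this toy net

lam = 1.0
R1 = np.sum((dL_dh * h) ** 2)             # ≈ Var∆ / p_l, the λ-term of Equation 1
regularized_loss = loss + lam * R1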
Moreover, the network is required to be continuous by A3, which is achieved by making the weights of nearby neurons close using a smoothing regularizing function smooth(W) ≈ ∫ |W′_t(t, t′)| dt dt′. The µ term for q-balancedness comes from P3, as q-balancedness is a necessary condition for A3. See the supplementary material for complete details. Here L̂ is the regularized loss, L the original one, and λ, µ, ν, ψ are the parameters:
L̂(W) = L(W) + λ·Σ_{i=1}^{n_l} (∂L/∂y^i · y^i)² + µ·(max_i W_l^i / min_i W_l^i)² + ψ·smooth(W_l) + ν·‖W‖_∞   (1)
We define the terms corresponding to λ, µ, ψ as R_1 ≈ Var∆/p_l, R_2 = q², R_3 = smooth(W_l). If we have achieved δ < 1/3 by P5, we can apply the well-known median-trick technique (31), drastically increasing fault tolerance: using only R repetitions of the network with component-wise median aggregation, we obtain an (ε, δ·exp(−R))-fault-tolerance guarantee. See the supplementary material for the calculations. In addition, we show that after training, when E_x ∇_W y_{L+1}(x) = 0, we have E_x E_ξ ∆_{L+1} = 0 + O(r²) (proof in the supplementary material). This result sheds some light on why neural networks are inherently fault-tolerant, in the sense that the mean of ∆_{L+1} is 0. Convolutional networks of architecture Conv-Activation-Pool can be seen as a sub-type of fully-connected ones, as they just have locally-connected matrices W_l, and therefore our techniques still apply. Using large kernel sizes (see the supplementary material for a discussion), smooth pooling, and smooth activations leads to a better approximation. We have developed techniques to assess fault tolerance and to improve it. Now we combine all the results into a single algorithm to certify fault tolerance." }, { "heading": "6 AN ALGORITHM FOR CERTIFYING FAULT TOLERANCE", "text": "We are now in a position to provide an algorithm (Algorithm 1) that reaches the desired (ε, δ)-fault tolerance by training with our regularizer and then physically duplicating the network a logarithmic number of times in hardware, assuming independent faults. We note that our algorithm works for a single input x, but it is easily extensible if the expressions in the Propositions are replaced with expectations over inputs (see the supplementary material). In order to estimate the required number of neurons, we use the bounds from T1 and P5, which require n ∼ p/ε². However, using the median approach allows for a fast exponential decrease in the failure probability: once the threshold of failing with probability 1/3 is reached by P5, it becomes easy to reach any required guarantee. The time complexity (relative to that of training) of the algorithm is O(D_{12} + C_l·p_l/ε²), and the space complexity is equal to that of one training call. See the supplementary material for the proofs of the resource requirements and correctness.
¹⁰ We note that the gradient of Var∆ is computable in linear time, since it is a Hessian-vector product.
¹¹ More neurons do not solve the problem, as E∆ stays constant with the growth of n by Theorem 1.
Intuitively, this is because, if the mean of a random variable is too high, more repetitions do not make the estimate lower.
Data: Dataset D, input-output point (x, y*), failure probabilities p_l, depth L, activation function φ ∈ C^∞, target ε and δ′ (the error-tolerance parameters from Definition 5), maximal complexity guess C ≈ ∫ |y′_l(t)| dt ≈ R_3^{guess}
Result: An architecture with (ε, δ′)-fault tolerance on x
1 Select initial width N = (n_1, ..., n_{L−1});
2 while true do
3 Train a network to obtain W, B;
4 Compute q from Proposition 3;
5 If q < 10^{−2}, increase the regularization parameter µ from Eq. 1, continue; // go to line 3
6 Compute δ_0 from Proposition 3 using q;
7 If δ_0 > 1/3, increase n by a constant amount, continue;
8 Compute R_3 from Eq. 1;
9 If R_3 > C, increase the regularization parameter ψ from Eq. 1, continue;
10 Compute E∆ and Var∆ from Theorem 1;
11 If E∆ > ε, output infeasible; // cannot do better than the mean¹¹
12 Compute δ from Proposition 5;
13 If δ > 1/3, increase n by a constant amount and increase λ in Eq. 1, continue;
14 Compute R = O(log(1/δ′));
15 Output the number of repetitions R, layer widths N, parameters W, B;
16 end
Algorithm 1: Achieving fault tolerance after training. The numbers q_max = 10^{−2} and δ_max = 1/3 are chosen for simplicity of the proofs. The asymptotic behavior does not change with different numbers, as long as δ_max < 1/2 and the constraints on q mentioned in the supplementary material are met." }, { "heading": "7 EXPERIMENTAL EVALUATION", "text": "In this section, we test the theory developed in the previous sections in proof-of-concept experiments. We first show that we can correctly estimate the first moments of the fault tolerance using T1, both for small networks (10-50 neurons) and for larger ones (VGG). We test the predictions of our theory, such as the decay of Var∆, the effect of our regularizer, and the guarantee from Algorithm 1. See the supplementary material for the technical details, where we also validate the assumption of derivative decay (A3) explicitly. Our code is provided at the anonymized repository github.com/iclr-2020-fault-tolerance/code. Increasing training dropout. We train sigmoid networks with N ∼ 100 on MNIST (see ComparisonIncreasingDropoutMNIST.ipynb). We use probabilities of failure at the inference and training stages p_i = 0.05 at the first layer and 10 values of p_t ∈ [0, 1.2·p_i]. The experiment is repeated 10 times. When estimating the error experimentally, we choose 6 repetitions of the training dataset to ensure that the variance of the estimate is low. The results are in Table 2. The experiments show that the crashing MAE (Mean Absolute Error for the network with crashes at inference) is most dramatically affected by dropout. Specifically, training with p_t ∼ p_i makes the network more robust at inference, which was well-established before. Moreover, the bound from T1 can correctly order the networks by their training dropout parameter with only 4% rank loss, which is the fraction of incorrectly ordered pairs (a small sketch of this metric is given below). All other quantities, including norms of the weights, are not able to order the networks correctly.
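For reference, here is a minimal sketch of the rank-loss metric, i.e., the fraction of pairs ordered differently by two quantities; the data is synthetic and purely illustrative.

import numpy as np
from itertools import combinations

def rank_loss(reference, predicted):
    # Fraction of pairs whose order under `predicted` disagrees with `reference`.
    pairs = list(combinations(range(len(reference)), 2))
    bad = sum((reference[i] - reference[j]) * (predicted[i] - predicted[j]) < 0
              for i, j in pairs)
    return bad / len(pairs)

# Example: dropout parameters (ground-truth order) vs. a bound that tracks them noisily.
p_t = np.linspace(0.0, 0.06, 10)
bound = p_t + 0.01 * np.random.default_rng(0).normal(size=10)
print(rank_loss(p_t, bound))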
See the supplementary material for a complete list of metrics in the experiment.
Table 2 (a) Experimental metrics:
Quantity           | Train rank loss | Test rank loss
Crashing, MAE      | 5.6%            | 5.6%
Crashing, Accuracy | 19.8%           | 17.7%
Correct, MAE       | 23.3%           | 22.0%
Correct, Accuracy  | 31.7%           | 39.9%
Table 2 (b) Theoretical bounds:
Quantity | Rank Loss
T1 Var∆  | 3.6%
P2 E∆    | 24.8%
P2 Var∆  | 31.7%
T1 E∆    | 40.8%" }, { "heading": "8 CONCLUSION", "text": "Fault tolerance is an important, overlooked, concrete AI safety issue (2). This paper describes a probabilistic fault-tolerance framework for NNs that allows one to get around the NP-hardness of the problem. Since the crash probability in neuromorphic hardware is low, we can simplify the problem to allow for a polynomial computation time. We use tail bounds to motivate the assumption that the weight perturbation is small. This allows us to use a Taylor expansion to compute the error. To bound the remainder, we require sufficient smoothness of the network, for which we use the continuous limit: nearby neurons compute similar things. We then transform the expansion into a tail bound on the loss of the network. This gives a probabilistic guarantee of fault tolerance. Using the framework, we are able to guarantee sufficient fault tolerance of a neural network given the parameters of the crash distribution. We then analyze the obtained expressions to compare fault tolerance between architectures and to optimize the fault tolerance of one architecture. We test our findings experimentally on small networks (MNIST) as well as on larger ones (VGG-16, MobileNet). Using our framework, one is able to deploy safer networks on neuromorphic hardware. Mathematically, the problem that we consider is connected to the problem of generalization (29; 27), since the latter also considers the expected loss change under a small random perturbation, E_{W+U} L(W+U) − L(W), except that those papers consider Gaussian noise while we consider Bernoulli noise. Evidence (32), however, shows that networks that generalize well are sometimes not fault-tolerant. Since the tools we develop for the study of fault tolerance could as well be applied in the context of generalization, they could be used to clarify this matter.
¹² Variance for P2 is derived in the supplementary material." } ]
null
null
SP:8d95af673099b1df7b837f583aa55678d67c5bd6
[ "This paper presents an approach towards extending the capabilities of feedback alignment algorithms, that in essence replace the error backpropagation weights with random matrices. The authors propose a particular type of network where all weights are constraint to positive values except the first layers, a monotonically increasing activation function, and where a single output neuron exists (i.e., for binary classification - empirical evidence for more output neurons is presented but not theoretically supported). This is to enforce that the backpropagation of the (scalar) error signal to affect the magnitude of the error rather than the sign, while preserving universal approximation. The authors also provide provable learning capabilities, and several experiments that show good performance, while also pointing out limitations in case of using multiple output neurons.", "This paper examines the question of learning in neural networks with random, fixed feedback weights, a technique known as “feedback alignment”. Feedback alignment was originally discovered by Lillicrap et al. (2016; Nature Communications, 7, 13276) when they were exploring potential means of solving the “weight transport problem” for neural networks. Essentially, the weight transport problem refers to the fact that the backpropagation-of-error algorithm requires feedback pathways for communicating errors that have synaptic weights that are symmetric to the feedforward pathway, which is biologically questionable. Feedback alignment is one approach to solving the weight transport problem, which as stated above, relies on the use of random, fixed weights for communicating the error backwards. It has been shown that in some cases, feedback alignment converges to weight updates that are reasonably well-aligned to the true gradient. Though initially considered a good potential solution for biologically realistic learning, feedback alignment both has not scaled up to difficult datasets and has no theoretical guarantees that it converges to the true gradient. This paper addresses both these issues." ]
The family of feedback alignment (FA) algorithms aims to provide a more biologically motivated alternative to backpropagation (BP) by substituting the computations that are unrealistic for physical brains to implement. While FA algorithms have been shown to work well in practice, there is a lack of rigorous theory proving their learning capabilities. Here we introduce the first feedback alignment algorithm with provable learning guarantees. In contrast to existing work, we do not require any assumption about the size or depth of the network, except that it has a single output neuron, as in binary classification tasks. We show that our FA algorithm can deliver its theoretical promises in practice, surpassing the learning performance of existing FA methods and matching backpropagation in binary classification tasks. Finally, we demonstrate the limits of our FA variant when the number of output neurons grows beyond a certain quantity.
[ { "affiliations": [], "name": "Mathias Lechner" } ]
[ { "authors": [ "Pierre Baldi", "Fernando Pineda" ], "title": "Contrastive learning and neural oscillations", "venue": "Neural Computation,", "year": 1991 }, { "authors": [ "Sergey Bartunov", "Adam Santoro", "Blake Richards", "Luke Marris", "Geoffrey E Hinton", "Timothy Lillicrap" ], "title": "Assessing the scalability of biologically-motivated deep learning algorithms and architectures", "venue": "In Conference on Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Yoshua Bengio" ], "title": "How auto-encoders could provide credit assignment in deep networks via target propagation", "venue": "arXiv preprint arXiv:1407.7906,", "year": 2014 }, { "authors": [ "Jeremy Bernstein", "Yu-Xiang Wang", "Kamyar Azizzadenesheli", "Animashree Anandkumar" ], "title": "signSGD: Compressed optimisation for non-convex problems", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Steven J Cook", "Travis A Jarrell", "Christopher A Brittin", "Yi Wang", "Adam E Bloniarz", "Maksim A Yakovlev", "Ken CQ Nguyen", "Leo T-H Tang", "Emily A Bayer", "Janet S Duerr" ], "title": "Whole-animal connectomes of both caenorhabditis elegans", "venue": "sexes. Nature,", "year": 2019 }, { "authors": [ "Francis Crick" ], "title": "The recent excitement about neural networks", "venue": "Nature, 337(6203):129–132,", "year": 1989 }, { "authors": [ "Chris HQ Ding", "Tao Li", "Michael I Jordan" ], "title": "Convex and semi-nonnegative matrix factorizations", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2008 }, { "authors": [ "Colin Egan", "Gordon Steven", "Patrick Quick", "Rubén Anguera", "Fleur Steven", "Lucian Vintan" ], "title": "Twolevel branch prediction using neural networks", "venue": "Journal of Systems Architecture,", "year": 2003 }, { "authors": [ "Charlotte Frenkel", "Martin Lefebvre", "David Bol" ], "title": "Learning without feedback: Direct random target projection as a feedback-alignment algorithm with layerwise feedforward training", "venue": null, "year": 1909 }, { "authors": [ "Justin Gilmer", "Colin Raffel", "Samuel S Schoenholz", "Maithra Raghu", "Jascha Sohl-Dickstein" ], "title": "Explaining the learning dynamics of direct feedback alignment", "venue": "International Conference on Learning Representations (ICLR) - Workshop track,", "year": 2017 }, { "authors": [ "Stephen Grossberg" ], "title": "Competitive learning: From interactive activation to adaptive resonance", "venue": "Cognitive science,", "year": 1987 }, { "authors": [ "Geoffrey Hinton" ], "title": "How to do backpropagation in a brain", "venue": "In Invited talk at the NIPS2007 Deep Learning Workshop,", "year": 2007 }, { "authors": [ "Kurt Hornik", "Maxwell Stinchcombe", "Halbert White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Daniel A Jiménez", "Calvin Lin" ], "title": "Neural methods for dynamic branch prediction", "venue": "ACM Transactions on Computer Systems (TOCS),", "year": 2002 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoff Hinton" ], "title": "Convolutional deep belief networks on cifar-10", "venue": "Unpublished manuscript,", "year": 2010 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "The cifar-100 dataset", "venue": "online: http://www. 
cs. toronto. edu/kriz/cifar. html,", "year": 2014 }, { "authors": [ "Yann LeCun" ], "title": "Learning process in an asymmetric threshold network", "venue": "In Disordered systems and biological organization,", "year": 1986 }, { "authors": [ "Daniel D Lee", "H Sebastian Seung" ], "title": "Learning the parts of objects by non-negative matrix factorization", "venue": null, "year": 1999 }, { "authors": [ "Qianli Liao", "Joel Z Leibo", "Tomaso Poggio" ], "title": "How important is weight symmetry in backpropagation", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2016 }, { "authors": [ "Timothy P Lillicrap", "Daniel Cownden", "Douglas B Tweed", "Colin J Akerman" ], "title": "Random synaptic feedback weights support error backpropagation for deep learning", "venue": "Nature communications,", "year": 2016 }, { "authors": [ "Theodore H Moskovitz", "Ashok Litwin-Kumar", "LF Abbott" ], "title": "Feedback alignment in deep convolutional networks", "venue": "arXiv preprint arXiv:1812.06488,", "year": 2018 }, { "authors": [ "Javier R Movellan" ], "title": "Contrastive hebbian learning in the continuous hopfield model", "venue": "In Connectionist models,", "year": 1991 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In International Conference on Machine Learning (ICML),", "year": 2010 }, { "authors": [ "Arild Nøkland" ], "title": "Direct feedback alignment provides learning in deep neural networks", "venue": "In Conference on Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Martin Riedmiller" ], "title": "Rprop-description and implementation", "venue": "details. Citeseer,", "year": 1994 }, { "authors": [ "Martin Riedmiller", "Heinrich Braun" ], "title": "A direct adaptive method for faster backpropagation learning: The rprop algorithm", "venue": "In IEEE international conference on neural networks,", "year": 1993 }, { "authors": [ "David E Rumelhart", "Geoffrey E Hinton", "Ronald J Williams" ], "title": "Learning representations by backpropagating", "venue": "errors. Nature,", "year": 1986 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li Fei-Fei" ], "title": "ImageNet Large Scale Visual Recognition Challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "Farial Shahnaz", "Michael W Berry", "V Paul Pauca", "Robert J Plemmons" ], "title": "Document clustering using nonnegative matrix factorization", "venue": "Information Processing & Management,", "year": 2006 }, { "authors": [ "Jost Tobias Springenberg", "Alexey Dosovitskiy", "Thomas Brox", "Martin Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. 
COURSERA: Neural networks for machine learning", "venue": null, "year": 2012 }, { "authors": [ "George Trigeorgis", "Konstantinos Bousmalis", "Stefanos Zafeiriou", "Bjoern Schuller" ], "title": "A deep seminmf model for learning hidden representations", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Will Xiao", "Honglin Chen", "Qianli Liao", "Tomaso Poggio" ], "title": "Biologically-plausible learning algorithms can scale to large datasets", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Seungil You", "David Ding", "Kevin Canini", "Jan Pfeifer", "Maya Gupta" ], "title": "Deep lattice networks and partial monotonic functions", "venue": "In Conference on Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Jinshi Yu", "Guoxu Zhou", "Andrzej Cichocki", "Shengli Xie" ], "title": "Learning the hierarchical parts of objects by deep non-smooth nonnegative matrix factorization", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Zhijian Yuan", "Erkki Oja" ], "title": "Projective nonnegative matrix factorization for image compression and feature extraction", "venue": "In Scandinavian Conference on Image Analysis,", "year": 2005 }, { "authors": [ "Hongyi Zhang", "Yann N. Dauphin", "Tengyu Ma" ], "title": "Residual learning without normalization via better initialization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "R wi", "R N b" ], "title": "<∞ such that ‖g(x)− f(x)‖∞ < ε. In essence, the set of functions g(x) of the form given in (18) is dense in C(In)", "venue": null, "year": 1989 } ]
[ { "heading": "1 INTRODUCTION", "text": "A key factor enabling the successes of Deep Learning is the backpropagation of error (BP) algorithm (Rumelhart et al., 1986). Since it has been introduced, BP has sparked several discussions on whether physical brains are realizing BP-like learning or not (Grossberg, 1987; Crick, 1989). Today, most researchers consent that two distinct characteristics of BP render the idea of a BP based learning in brains as implausible: 1) The usage of symmetric forward and backward connections and 2) the strict separation of activity and error propagation (Bartunov et al., 2018). These two objections have lead researchers to search for more biologically motivated alternatives to BP.\nThe three most influential families of BP alternatives distilled so far are Contrastive Hebbian Learing (CHL) (Movellan, 1991), target-propagation (TP) (LeCun, 1986; Hinton, 2007; Bengio, 2014) and feedback Alignment (FA) (Lillicrap et al., 2016).\nThe idea of CHL is to propagate the target activities, instead of the errors, backward through the network. For this reason, a temporal dimension is added to the neuron activities. Each neuron then adapts its parameters based on the temporal differences of its ”forward” and ”backward” activity. The two significant critic points of CHL are the requirement for symmetric ”forward-backward” connections and the use of alternating ”forward” and ”backward” phases (Baldi & Pineda, 1991; Bartunov et al., 2018).\nTP shares the idea with Contrastive Hebbian Learning of propagating target activities instead of errors. However, rather than keeping symmetric forward and backward paths, the reciprocal propagation of the activities are realized through learned connections. Consequently, each layer has assigned two objectives: Learning the inverse of the layer’s forward function and minimizing the difference to the back-projected target activity. Variants of TP differ in how exactly the target activity is projected backward (LeCun, 1986; Bengio, 2014; Bartunov et al., 2018). Theoretical guarantees of TP rely on the assumption that each reciprocal connection implements the perfect inverse of the corresponding forward function. This issue of an imperfect inverse was also found to be the ”bottleneck” of TP in practice (Bartunov et al., 2018). When the output of a layer has a significant lower dimension than its input, reconstructing the input from the output becomes challenging, resulting in poor learning performance.\nFeedback alignment algorithms eliminate the weight sharing implausibility of BP by replacing the symmetric weights in the error backpropagation path by random matrices. The second objection, i.e., separate activity and error channels, is attenuated by Direct Feedback Alignment (Nøkland, 2016) which drastically reduces the number of channels carrying an error signal. While feedback alignment algorithms work well on small and medium-sized benchmarks, a recent study identified that they are unable to provide learning on more challenging datasets like ImageNet (Bartunov et al., 2018). Another criticism of FA algorithms is the lack of rigorous mathematical justification and convergence guarantees of the performed computations.\nIn this work, we investigate feed-forward networks where the weights of all, expect the first, layers are constrained to positive values. We prove that this constraint does not invalidate the universal approximation capabilities of neural networks. 
Next, we show that, in combination with monotonic activation functions, all layers from the second layer on realize monotonically increasing functions. The backpropagation of a scalar error signal through these layers only affects the magnitude of the error signal but does not change its sign. Consequently, we prove that algorithms that bypass the error backpropagation steps, such as Direct Feedback Alignment, can compute the sign of the true gradient with respect to the weights of our constrained networks without the need for backpropagation. Finally, we show that our algorithm, which we call monotone Direct Feedback Alignment, can deliver its theoretical promises in practice by surpassing the learning performance of existing feedback alignment algorithms in binary classification tasks, i.e., when the error signal is scalar, and by providing decent performance even when the error signal is not scalar.

We make the following key contributions:

• The first FA algorithm that has provable learning capabilities for non-linear networks of arbitrary depth

• An experimental evaluation showing that our FA algorithm outperforms existing FA algorithms and matches backpropagation in binary classification tasks

• An efficient TensorFlow implementation of all tested algorithms, which we make publicly available1" }, { "heading": "2 BACKPROPAGATION AND FEEDBACK ALIGNMENT", "text": "We consider the feed-forward neural network

$$h_l(h_{l-1}) := \begin{cases} f(W_l h_{l-1} + b_l) & \text{if } l < L \\ W_l h_{l-1} + b_l & \text{if } l = L \end{cases}, \qquad h_0 := x, \quad W_l \in \mathbb{R}^{n_l \times n_{l-1}}, \; b_l \in \mathbb{R}^{n_l} \tag{1}$$

where $f$ is the non-linear activation function, $x$ the input and $h_L$ the output of the network. For classification tasks, $h_L$ is usually transformed into a probability distribution with discrete support by a sigmoid or softmax function.

During training, the parameters $W_l, b_l$, $l = 1, \dots, L$ are adjusted to minimize a loss function $\mathcal{L}(y, h_L)$ on samples of a given training distribution $p(y, x)$. This is usually done by performing gradient descent

$$\theta_l \leftarrow \theta_l - \alpha \frac{d\mathcal{L}}{d\theta_l}, \quad \alpha \in \mathbb{R}_+ \tag{2}$$

with respect to the parameters $\theta_l \in \{W_l, b_l\}$, $1 \le l \le L$ of the network.

1https://github.com/mlech26l/iclr_paper_mdfa

Figure 1: Flow of activity and error signals in a) BP, b) FA, c) DFA, and d) mDFA." }, { "heading": "2.1 BACKPROPAGATION", "text": "Backpropagation (Rumelhart et al., 1986) is the primary method to compute the gradients needed by the updates in equation (2), by iteratively applying the chain rule

$$\frac{d\mathcal{L}}{d\theta_l} = \left(\frac{dh_l}{d\theta_l}\right)^T \frac{d\mathcal{L}}{dh_l} \tag{3}$$

$$\frac{d\mathcal{L}}{dh_l} = \left(\frac{dh_{l+1}}{dh_l}\right)^T \frac{d\mathcal{L}}{dh_{l+1}} \tag{4}$$

$$\frac{dh_{l+1}}{dh_l} = W_{l+1}\,\mathrm{diag}\big(f'(W_{l+1} h_l + b_{l+1})\big). \tag{5}$$

A graphical representation of how information first flows forward and then backward through each layer in BP is shown in Figure (1) a.

Two major concerns argue against the idea that biological neural networks are implementing BP-based learning: I) the weight matrix $W_l$ of the forward path is reused in the backward path in the form of $W_l^T$ (weight sharing), and II) the strict separation of activity-carrying forward and error-carrying backward connections (reciprocal error transport)." }, { "heading": "2.2 FEEDBACK ALIGNMENT ALGORITHMS", "text": "Feedback alignment addresses the implausibility of reusing $W_l^T$ in the backward path by replacing $W_l^T$ with a fixed random matrix $B_l$. Lillicrap et al. (2016) showed that this somewhat counterintuitive approach works remarkably well in practice. The term ”feedback alignment” originates from the observation made by Lillicrap et al. (2016) that the angle between the FA update vector and the true gradient starts to decrease, i.e., align, after a few epochs of the training algorithm. 
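To make the difference between the BP and FA backward passes concrete, the following minimal numpy sketch computes both hidden-layer updates once for a two-layer network. It is an illustrative sketch, not the implementation from our repository: the layer sizes, the squared loss, and the random feedback matrix B2 are assumptions chosen for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer network: x -> h1 = tanh(W1 x + b1) -> y_hat = W2 h1 + b2.
n_in, n_hid, n_out = 8, 16, 4
W1, b1 = rng.normal(size=(n_hid, n_in)), np.zeros(n_hid)
W2, b2 = rng.normal(size=(n_out, n_hid)), np.zeros(n_out)
B2 = rng.normal(size=(n_out, n_hid))        # fixed random feedback matrix (FA)

x, y = rng.normal(size=n_in), rng.normal(size=n_out)

# Forward pass.
h1 = np.tanh(W1 @ x + b1)
y_hat = W2 @ h1 + b2
e = y_hat - y                               # dL/dy_hat for L = 0.5*||y_hat - y||^2

# Backpropagation: the error travels backward through W2^T (weight sharing).
dW1_bp = np.outer((W2.T @ e) * (1.0 - h1**2), x)

# Feedback alignment: W2^T is replaced by the fixed random matrix B2^T.
dW1_fa = np.outer((B2.T @ e) * (1.0 - h1**2), x)

# The FA update is generally not the gradient, but over training it tends to
# align with it; here we simply compare the two update directions once.
cos = np.sum(dW1_bp * dW1_fa) / (np.linalg.norm(dW1_bp) * np.linalg.norm(dW1_fa))
print("cosine(BP update, FA update) =", cos)
```

The cosine similarity printed at the end is the quantity the alignment literature tracks over the course of training.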
Theoretical groundwork on this alignment principle of FA relies on strong assumptions such as a linearized network with one hidden layer (Lillicrap et al., 2016).

FA avoids any weight sharing but does not address the reciprocal error transport implausibility, due to its strict separation of forward and backward pathways, as shown in Figure (1) b. Direct Feedback Alignment (DFA) (Nøkland, 2016) relaxes this issue by replacing all backward paths with a direct feed from the output layer error gradient $\frac{d\mathcal{L}}{dh_L}$. Consequently, there is only a single error signal that is distributed across the entire network, which is arguably more biologically plausible than reciprocal error connections. The resulting parameter updates of DFA are of the form

$$\delta\theta_l := \begin{cases} \frac{d\mathcal{L}}{dh_L}\frac{dh_L}{d\theta_l} & \text{if } l = L \\[4pt] \frac{d\mathcal{L}}{dh_L}\,B_l\,\frac{dh_l}{d\theta_l} & \text{if } l < L \end{cases} \tag{6}$$

where $B_l \in \mathbb{R}^{n_L \times n_l}$ is a random matrix. A graphical schematic of DFA is shown in Figure (1) c. Similar to FA, DFA shows decent learning performance on mid-sized classification tasks (Nøkland, 2016), but fails on more complex datasets such as ImageNet (Bartunov et al., 2018). Theory on adapting the alignment principle to DFA shows that, under the strong assumptions of constant DFA update directions and a layer-wise criterion minimization, the DFA update vector will align with the true gradient (Nøkland, 2016; Gilmer et al., 2017).

Recently, Frenkel et al. (2019) proposed to combine ideas from feedback alignment and target-propagation in their Direct Random Target Projection (DRTP) algorithm. While DRTP shows decent empirical performance, theoretical guarantees about DRTP rely on linearized networks." }, { "heading": "2.3 SIGN-SYMMETRY ALGORITHMS", "text": "Liao et al. (2016) introduced the sign-symmetry algorithm, a hybrid of BP and FA. Sign-symmetry locks the signs of the feedback weights $B_l$ to the signs of $W_l^T$, while keeping their absolute values random. The authors showed that this approach drastically improves learning performance compared to standard FA. Furthermore, Moskovitz et al. (2018) and Xiao et al. (2019) demonstrated that the sign-symmetry algorithm is even able to match backpropagation when training deep network architectures on large datasets such as ImageNet.

While these empirical observations suggest that the polarity of the error feedback is more important than its magnitude, a mathematical justification of sign-symmetry remains absent. Similar to FA, sign-symmetry relaxes the strict weight-sharing implausibility, but still relies on an unrealistic reciprocal error transport." }, { "heading": "3 MONOTONE DIRECT FEEDBACK ALIGNMENT", "text": "In this section, we first introduce a new class of feed-forward networks, where all but the first layer are constrained to realize monotone functions. We call such networks mono-nets and show that they are as expressive as unconstrained feed-forward networks. Next, we prove that for mono-nets on single-output tasks, e.g., binary classification, feedback alignment algorithms provide the sign of the gradient. The sign of the gradient is interesting for learning, as it tells us whether the value of a parameter should be increased or decreased in order to reduce the loss. At the end of this section, we highlight similarities to algorithms from the literature that provide resilient learning by relying only on the sign of the gradient.

Neural networks with monotonicity constraints have already been studied in the literature (You et al., 2017), however not in the context of learning algorithms. Definition 1 (mono-net). 
A mono-net is a feed-forward neural network with $L$ layers $h_1, \dots, h_L$, each layer $l$ composed of $n_l$ units, and the semantics

$$h_l(h_{l-1}) := \begin{cases} f(W_l h_{l-1} + b_l) & \text{if } l < L \\ W_l h_{l-1} + b_l & \text{if } l = L \end{cases} \tag{7}$$
$$h_0 := x \tag{8}$$
$$W_1 \in \mathbb{R}^{n_1 \times n_0} \tag{9}$$
$$W_l \in \mathbb{R}_+^{n_l \times n_{l-1}} \quad \text{for } l > 1 \tag{10}$$
$$b_l \in \mathbb{R}^{n_l} \tag{11}$$

where $\mathbb{R}_+$ denotes the non-negative reals, i.e., $\mathbb{R}_+ = \{x \in \mathbb{R} \mid x \ge 0\}$, $f$ is a non-linear, monotonically increasing activation function, $x$ the input and $h_L$ the output of the network.

The major difference between mono-nets and general feed-forward neural networks is the restriction to non-negative weight values in all layers from the second layer on. Combined with the monotonically increasing activation function, this means that each layer $h_l(h_{l-1})$, $l \ge 2$, realizes a monotonically increasing function. Because functional composition preserves monotonicity, the complete network up to the first layer,

$$h_L \circ h_{L-1} \circ \dots \circ h_2(h_1), \tag{12}$$

implements a monotonically increasing function.

mono-nets are Universal Approximators At first glance, this restriction seems counterproductive, as it might interfere with the expressiveness of the networks. However, we prove in Theorem 1 that mono-nets with hyperbolic tangent activation are universal approximators, meaning that they can approximate any continuous function arbitrarily closely. A potential drawback of the monotonicity constraint is that we might need a larger number of units in the hidden layers to achieve the same expressiveness as a general feed-forward network, as illustrated in our proof of Theorem 1.

Theorem 1 (mono-nets are Universal Approximators). Let $I_n$ be the $n$-dimensional unit hypercube $[0, 1]^n$ and $C(I_n)$ denote the set of continuous functions $f : I_n \to \mathbb{R}$. We define $\|f\|_\infty$ as the supremum norm of $f \in C(I_n)$ over its domain $I_n$. For any given $f \in C(I_n)$ and $\varepsilon > 0$, there exists a function $m : I_n \to \mathbb{R}$ of the form

$$m(x) := \sum_{i=1}^{M} \bar{v}_i \tanh\big(\bar{w}_i^T x + \hat{w}_i^T(-x) + \bar{b}_i\big) + c \tag{13}$$

with $\bar{v} \in \mathbb{R}_+^M$, $\bar{w}_i \in \mathbb{R}_+^n$, $\hat{w}_i \in \mathbb{R}_+^n$, $\bar{b} \in \mathbb{R}^M$, $c \in \mathbb{R}$ and $M < \infty$ such that $\|m(x) - f(x)\|_\infty < \varepsilon$.

In essence, the set of functions $m(x)$ of the form given in (13) is dense in $C(I_n)$.

Proof. See supplementary materials, Appendix A.1.

mDFA provides the sign of the gradient Here, we prove that for 1-dimensional outputs DFA applied to a mono-net, which we simply call mDFA, provides the sign of the true gradient. Note that we focus our methods on DFA instead of ”vanilla” FA, due to the superiority of DFA in terms of biological plausibility and empirical performance (Nøkland, 2016; Bartunov et al., 2018).

Theorem 2 (For 1-dimensional outputs mDFA computes the sign of the gradient). Let $\mathcal{L}(y, h_L)$ be a loss function and $m(x) := h_L \circ h_{L-1} \circ \dots \circ h_2 \circ h_1 \circ h_0(x)$ be a mono-net according to Definition 1 with parameters $\Theta := \{W_l, b_l \mid l = 1, \dots, L\}$. We denote by $\delta\theta$ the update value computed by mDFA and by $\nabla\theta$ the gradient $\frac{\partial\mathcal{L}}{\partial\theta}$ for any $\theta \in \{W_l, b_l\}$ with $1 \le l \le L$. If $n_L = 1$, it follows that

$$(\delta\theta)_{i,j} \cdot (\nabla\theta)_{i,j} \ge 0 \tag{14}$$

for each coordinate $(i, j)$ of $\theta$.

Proof. See supplementary materials, Appendix A.2.

A graphical illustration of how activities and errors propagate in mDFA is shown in Figure (1) d.
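As a concrete check of Definition 1 and Theorem 2, the following numpy sketch builds a small scalar-output mono-net and verifies the coordinate-wise sign agreement between the mDFA update and the true gradient. All sizes, the squared loss, and the non-negative feedback matrix are illustrative assumptions, not the TensorFlow implementation used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# mono-net (Definition 1): unconstrained first layer, non-negative weights afterwards.
n_in, n_hid = 6, 10
W1 = rng.normal(size=(n_hid, n_in))        # W1 may contain negative entries
W2 = np.abs(rng.normal(size=(1, n_hid)))   # W2 >= 0, scalar output (n_L = 1)
B1 = np.abs(rng.normal(size=(1, n_hid)))   # non-negative random feedback matrix

x, y = rng.normal(size=n_in), np.array([0.3])

h1 = np.tanh(W1 @ x)                       # monotonically increasing activation
y_hat = W2 @ h1
e = y_hat - y                              # dL/dh_L for L = 0.5*(y_hat - y)^2

# True gradient of the hidden weights (backpropagated through W2^T).
grad_W1 = np.outer((W2.T @ e) * (1 - h1**2), x)

# mDFA update: the scalar error is fed back through the fixed matrix B1 instead.
mdfa_W1 = np.outer((B1.T @ e) * (1 - h1**2), x)

# Theorem 2: every coordinate of the mDFA update has the same sign as the true
# gradient (their product is >= 0), i.e., mDFA tells each weight whether to
# grow or shrink, even though the magnitudes differ.
print("coordinate-wise sign agreement:", np.all(grad_W1 * mdfa_W1 >= 0))
```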
Riedmiller & Braun (1993) showed that their approach could stabilize the training of a neural network compared to standard gradient descent.\nPerforming gradient descent with taking the sign of each gradient coordinate is on an algorithmic level equivalent to the well-known steepest descent method with L∞ norm (Boyd & Vandenberghe, 2004; Bernstein et al., 2018). signSGD (Bernstein et al., 2018) studies convergence properties of the stochastic approximation of this algorithm.\nWhat about networks with more than one output neuron? Theorem 2 applies only to networks with scalar output. As a natural consequence, one may ask whether such theoretical guarantees can be extended to more dimensional output variables. The simple answer is, unfortunately not. In the supplementary materials section A.3 we provide a counterexample showing that Theorem 2 naively extended to two output neurons does not hold anymore.\nWe want to note that the requirement of a neural network to have only a single output neuron is biologically unjustified. It is known that sub-circuits of biological neuronal networks can feed to multiple motor neuron groups Cook et al. (2019).\nHow does mDFA relate to the non-negative matrix factorization? A seemingly related concept to mDFA is the non-negative matrix factorization (NMF) algorithm. NMF decomposes an observation matrix V into a weight matrix W and a latent variable matrix H such that V ≈ WH . In contrast to other decomposition-based unsupervised learning methods, all three matrices V,W and H are restricted to non-negative entries. While NMF can model data that is inherently non-negative, such has semantic features of images and text, effectively Yuan & Oja (2005); Shahnaz et al. (2006), the method is unable to learn subtractive and non-linear structures that are present in the data Lee & Seung (1999).\nSemi-non-negative matrix factorization Ding et al. (2008) relaxes the original restriction to nonnegative observations of NMF, by only constraining the weight matrix W to be non-negative. Deep semi-NMF Trigeorgis et al. (2014) further enhances the expressiveness of NMF by adding multiple layers and non-linearities between them to the decomposition.\nConcerning this work, the semantics of mono-nets from the second layer on is equivalent to that of deep semi-NMF models. However, the unconstrained first layer of mono-nets provides universal approximation capabilities, enabling mono-nets to learn subtractive and non-monotonic input dependencies. Moreover, while deep NMF models are mostly trained via layer-wise learning in an unsupervised context Trigeorgis et al. (2014); Yu et al. (2018), the sole purpose of mono-nets is to investigate alternatives to backpropagation for training multi-layer classifiers." }, { "heading": "4 EXPERIMENTS", "text": "So far, we have only proven the learning capabilities of mDFA. What remains unclear is whether mDFA can deliver its theoretical promises in practice. In this section, we experimentally evaluate the learning performance of mDFA on a set of empirical benchmarks. We aim to answer the following two questions:\n• How well does mDFA perform compared to DFA, FA, and backpropagation in ”natural conditions,” i.e., in binary classification tasks, and\n• how much does the performance of mDFA degrade in multi-class classification tasks?\nOur performance measure is the achieved classification accuracy of a network trained by a particular method. 
First, we report the highest accuracy achieved on the training set, which tells us how well the algorithm could fit the model to the training data. Second, for each method, we tuned the hyperparameters on a separate validation set and selected the best performing configuration to be evaluated on the test data. The obtained test accuracy tells us how well the model generalizes to data outside the training set.

We evaluate fully-connected networks (FC) and convolutional networks (CNNs) in the form of modified all-convolutional nets (Springenberg et al., 2015) with hyperbolic tangent, ReLU (Nair & Hinton, 2010), and hard-tanh non-linearities. The hard-tanh function is defined as

$$\text{hard-tanh}(x) := \min(\max(x, -1), 1). \tag{15}$$

Hyperparameters For all training methods, we fixed the batch size to 64 and applied no regularization, no normalization, and no data augmentation. The optimizer, i.e., one of {”vanilla” gradient descent, Adam (Kingma & Ba, 2014), RMSprop (Tieleman & Hinton, 2012)}, the learning rate, the number of training epochs, and the weight initialization method were tuned on the validation set. We tested three different weight initialization schemes: all zeros, a scaled uniform distribution, and a normal distribution. Note that the all-zeros scheme was only tested on the forward weights. Our uniform initialization followed the methodology of Nøkland (2016), i.e., scaling the bounds of the distribution inversely with the square root of the number of incoming connections of a neuron. In order to comply with the weight constraints of mono-nets, for mDFA the lower bound of the uniform distribution was set to $\varepsilon = 10^{-3}$. Moreover, for mDFA we post-processed the initial weights drawn from the normal distribution by taking their absolute values. Input variables are scaled to [0, 1]. Detailed network architectures and a brief discussion of the best performing hyperparameter configurations are given in the supplementary materials in sections B.1 and B.3." }, { "heading": "4.1 BINARY CLASSIFICATION", "text": "We created a series of binary classification benchmarks by randomly sampling two classes from the well-studied CIFAR-100 and ImageNet datasets. We then train and test a 1-vs-1 classifier on the samples of the two classes. For each dataset, we create five such ”binarized” sub-datasets and report the mean and standard deviation over these experiments. CIFAR-100 (Krizhevsky et al., 2014) is a challenging classification dataset, consisting of 32-by-32 RGB images in 100 real-world object categories. ImageNet (Russakovsky et al., 2015) is a large-scale object classification benchmark and the de-facto standard for assessing new deep learning methods. Each sample is a high-resolution image representing one out of 1000 possible object classes. We pre-processed all samples by cropping and resizing each image to 224-by-224 pixels. Because ImageNet lacks a public test set, we report the validation accuracy, as done by Bartunov et al. (2018).

The results on the binarized benchmarks are shown in Table 1 for CIFAR-100 and Table 2 for ImageNet. mDFA could bring the training error to zero for fully-connected networks, and match the test/validation accuracy of backpropagation for convolutional networks.

Poor learning performance for ReLU networks One surprising characteristic of our results is that mDFA fails to provide the same level of learning performance as backpropagation for ReLU networks. Recall that Theorem 1 proves the universal approximation capabilities only for mono-nets with the tanh activation function. 
A mono-net with ReLU non-linearity restricts both the activations and the weight values to positive values. These constraints arguably limit the approximation capabilities of mono-nets in combination with ReLU and thus explain the poor performance of mDFA for ReLU networks. We validated this hypothesis by testing a rectifier non-linearity whose image also contains negative values. The hard-tanh function matches exactly this criterion, and the decent learning performance of mDFA for hard-tanh networks confirms our hypothesis. Note that FA and DFA also improved in performance when switching from the ReLU to the hard-tanh activation. This observation suggests that feedback alignment algorithms in general benefit from symmetric activation functions.

Discrepancy between fully-connected and convolutional networks Though mDFA achieved the same training accuracy as backpropagation for fully-connected networks, its generalization ability, i.e., test and validation accuracy, slightly lags behind BP on our binarized CIFAR-100 benchmark. This observation aligns with the studies of Nøkland (2016) and Bartunov et al. (2018). For convolutional neural networks, this effect is reversed, i.e., we observe a decent test/validation accuracy but a higher training error than BP. We speculate that the two tested initialization schemes for the feedback weights caused this discrepancy. Both backpropagation and feedback alignment have been shown to be sensitive to the employed initialization method (Zhang et al., 2019; Bartunov et al., 2018). The restriction of mDFA’s weights to positive values requires a rethinking of the initialization methods examined in the literature. We leave this study open for future work." }, { "heading": "4.2 MULTI-CLASS CLASSIFICATION", "text": "Here we modify the benchmark creation procedure used above by randomly sampling n classes from the datasets instead of just two. Due to their compelling results in our binary classification benchmark, we restrict our evaluation to networks with tanh activation. Table 3 and Table 4 show the results on our n-class CIFAR-100 benchmark for fully-connected networks and CNNs, respectively. The results on n-class ImageNet can be found in the supplementary materials in Tables 5 and 6.

mDFA can provide learning for networks with more than one output neuron Though we do not have a complete theory of mDFA for multi-dimensional outputs, our experiments indicate that mDFA can provide learning for networks with more than one output neuron. In particular, mDFA outperforms the other feedback alignment methods for convolutional networks with ten or fewer output neurons. However, mDFA falls behind standard DFA for networks with more than ten output neurons. This observation suggests that our restriction to positive weights can be beneficial even for multi-class tasks but eventually hurts learning performance when the number of classes grows larger.

Feedback alignment algorithms, in general, tend to struggle with increasing output dimension Our results show the trend that, as the number of classes increases, feedback alignment algorithms struggle to fit the training data. While BP can reduce the training error to almost zero independently of the output dimension, the training errors achieved by FA algorithms are significantly higher and correlate with the number of output neurons. 
Our observations suggest that the dimension of the error signal affects the training convergence of FA algorithms, in contrast to BP, which appears to be less affected by the dimension of the error signal. This relation potentially provides a step towards understanding why FA algorithms fail on challenging datasets, as described by Bartunov et al. (2018)." }, { "heading": "5 CONCLUSION", "text": "Feedback alignment algorithms are promising biologically motivated alternatives to backpropagation. While the existing literature provides empirical evidence that FA algorithms can work well in practice, there is still a lack of rigorous theory formalizing their learning capabilities. Here we contributed to the field of biologically motivated learning algorithms by introducing the first feedback alignment algorithm that provably provides learning for non-linear networks of arbitrary depth with a single output neuron. We showed that our FA algorithm outperforms existing FA algorithms in binary classification tasks, and even provides decent learning on multi-class problems.

Limitations We demonstrated on empirical benchmarks as well as theoretical examples that our method is limited to networks with scalar output. Indeed, uncovering the mathematical principles behind the decent learning performance of FA and DFA on multi-class tasks remains an open challenge.

Is this really useful? In terms of scientific significance, this work provided theoretical contributions toward understanding the capabilities and limits of feedback alignment algorithms. From a practical point of view, our mDFA algorithm has an advantage over backpropagation concerning training latency, as all weight updates can be computed in parallel (see Figure 1). Furthermore, we have shown that mDFA is superior to other feedback alignment algorithms in binary classification tasks. Consequently, mDFA provides an effective solution for binary classification problems with training latency constraints. Dynamic branch prediction in microprocessor pipelines is one such problem instance, where program-specific binary branch outcomes, i.e., branch taken/not taken, need to be learned in real time. Because of this real-time constraint, existing branch predictors often employ only shallow perceptron modules (Jiménez & Lin, 2002; Egan et al., 2003). mDFA could enable deeper branch predictor networks to be learned in real time." }, { "heading": "ACKNOWLEDGMENTS", "text": "This research was supported in part by the Austrian Science Fund (FWF) under grant Z211-N23 (Wittgenstein Award)." }, { "heading": "A APPENDIX", "text": "A.1 MONO-NETS ARE UNIVERSAL APPROXIMATORS

We denote by $\sigma(x) := \frac{1}{1 + \exp(-x)}$ the sigmoid function, by $\tanh(x) := \frac{e^x - e^{-x}}{e^x + e^{-x}}$ the hyperbolic tangent, and by

$$\mathbb{1}[\text{expr}] := \begin{cases} 1 & \text{if expr is true} \\ 0 & \text{if expr is false} \end{cases}$$

the indicator function. From the definitions above we can derive the equalities

$$\sigma(x) = \frac{1}{2}\left(1 + \tanh\left(\frac{x}{2}\right)\right) \tag{16}$$

$$\tanh(x) = -\tanh(-x), \tag{17}$$

which we will need for our proof.

Lemma 1 (Universal Approximation Theorem). For any given $f \in C(I_n)$ and $\varepsilon > 0$, there exists a function $g : I_n \to \mathbb{R}$ of the form

$$g(x) := \sum_{i=1}^{N} v_i\,\sigma(w_i^T x + b_i) \tag{18}$$

with $v \in \mathbb{R}^N$, $w_i \in \mathbb{R}^n$, $b \in \mathbb{R}^N$ and $N < \infty$ such that $\|g(x) - f(x)\|_\infty < \varepsilon$.

In essence, the set of functions $g(x)$ of the form given in (18) is dense in $C(I_n)$.

Proof. See Hornik et al. (1989).

Theorem 3 (mono-nets are Universal Approximators). 
For any given $f \in C(I_n)$ and $\varepsilon > 0$, there exists a function $m : I_n \to \mathbb{R}$ of the form

$$m(x) := \sum_{i=1}^{M} v_i \tanh(w_i^T x + b_i) + c \tag{19}$$

with $v \in \mathbb{R}_+^M$, $w_i \in \mathbb{R}^n$, $b \in \mathbb{R}^M$, $c \in \mathbb{R}$ and $M < \infty$ such that $\|m(x) - f(x)\|_\infty < \varepsilon$.

In essence, the set of functions $m(x)$ of the form given in (19) is dense in $C(I_n)$.

Proof. By Lemma 1 we know that there exists a $g(x)$ with $\|g(x) - f(x)\|_\infty < \varepsilon$ that has the form

$$g(x) = \sum_{i=1}^{N} v_i\,\sigma(w_i^T x + b_i).$$

We will show that we can reformulate $g(x)$ into the form of equation (19). Our basic idea is to propagate all negative weight entries into the first layer, where negative weight values are allowed. Applying identity (16), splitting the sum by the sign of $v_i$, and rewriting the negated terms with identity (17), we get

$$\begin{aligned} g(x) &= \sum_{i=1}^{N} v_i\,\sigma(w_i^T x + b_i) \\ &= \sum_{i=1}^{N} v_i\,\frac{1}{2}\left(1 + \tanh\!\Big(\frac{w_i^T x + b_i}{2}\Big)\right) \\ &= \sum_{i=1}^{N} \frac{v_i}{2} \tanh\!\Big(\frac{w_i^T x + b_i}{2}\Big) + \frac{1}{2}\sum_{i=1}^{N} v_i \\ &= \sum_{i=1}^{N} \mathbb{1}[v_i \ge 0]\,\frac{v_i}{2} \tanh\!\Big(\frac{w_i^T x + b_i}{2}\Big) + \sum_{i=1}^{N} \Big(-\mathbb{1}[v_i < 0]\,\frac{v_i}{2}\Big) \tanh\!\Big(\frac{-w_i^T x - b_i}{2}\Big) + \frac{1}{2}\sum_{i=1}^{N} v_i \\ &= \sum_{i=1}^{N} \bar{v}_i \tanh\big(\bar{w}_i^T x + \bar{b}_i\big) + \sum_{i=1}^{N} \bar{v}'_i \tanh\big(\hat{w}_i^T x + \hat{b}_i\big) + c \end{aligned}$$

with

$$\bar{v}_i = \mathbb{1}[v_i \ge 0]\,\frac{v_i}{2} \ge 0, \quad \bar{v}'_i = -\mathbb{1}[v_i < 0]\,\frac{v_i}{2} \ge 0, \quad \bar{w}_i = \frac{w_i}{2}, \quad \bar{b}_i = \frac{b_i}{2}, \quad \hat{w}_i = -\frac{w_i}{2}, \quad \hat{b}_i = -\frac{b_i}{2}, \quad c = \frac{1}{2}\sum_{i=1}^{N} v_i.$$

Therefore, we showed that there exists an $m(x)$ of the form of equation (19), with $M = 2N$ terms and non-negative outer weights, that satisfies $\|m(x) - f(x)\|_\infty < \varepsilon$.

This proof shows how to translate any neural network with sigmoid activation function and one hidden layer of size $N$ into a mono-net with $2N$ hidden units.

A.2 FOR 1-DIMENSIONAL OUTPUTS MDFA COMPUTES THE SIGN OF THE GRADIENT

Lemma 2 (The gradient of a monotone layer is non-negative). Let $m : \mathbb{R}^{n_0} \to \mathbb{R}$ with $m(x) := h_L \circ h_{L-1} \circ \dots \circ h_2 \circ h_1 \circ h_0(x)$ be a mono-net according to Definition 1. Then

$$\left(\frac{dh_l}{dh_{l-1}}\right)_{i,j} \ge 0 \tag{20}$$

for any $l, i, j$ with $2 \le l \le L$, $1 \le i \le n_l$ and $1 \le j \le n_{l-1}$.

Proof. We have to distinguish two cases.

Case 1: $l = L$, i.e., there is no activation function. We have

$$\left(\frac{dh_l}{dh_{l-1}}\right)_{i,j} = (W_l)_{i,j} \ge 0 \tag{21}$$

according to the definition in Equation (10).

Case 2: $l < L$, i.e., there is an activation function. We have

$$\left(\frac{dh_l}{dh_{l-1}}\right)_{i,j} = \big(W_l\,\mathrm{diag}(f'(W_l h_{l-1} + b_l))\big)_{i,j} \tag{22}$$

$$= \sum_{k=1}^{n_{l-1}} (W_l)_{i,k}\,\big(\mathrm{diag}(f'(W_l h_{l-1} + b_l))\big)_{k,j}, \tag{23}$$

where $f'$ is the derivative of the activation function. Because $f$ is a monotonically increasing function, its derivative is non-negative everywhere. As a result, we have a sum of products of non-negative values. Ergo,

$$\left(\frac{dh_l}{dh_{l-1}}\right)_{i,j} \ge 0. \tag{24}$$

Lemma 3 (The gradient of a composition of monotone layers is non-negative). Let $m : \mathbb{R}^{n_0} \to \mathbb{R}$ with $m(x) := h_L \circ h_{L-1} \circ \dots \circ h_2 \circ h_1 \circ h_0(x)$ be a mono-net according to Definition 1. Then

$$\left(\frac{dh_l}{dh_k}\right)_{i,j} \ge 0 \tag{25}$$

for any $l, k, i, j$ with $2 \le l \le L$, $1 \le k < l$, $1 \le i \le n_l$ and $1 \le j \le n_k$.

Proof. By applying the chain rule we get

$$\left(\frac{dh_l}{dh_k}\right)_{i,j} = \left(\prod_{m=k+1}^{l} \frac{dh_m}{dh_{m-1}}\right)_{i,j}. \tag{26}$$

According to Lemma 2, this is a product of matrices with non-negative entries. Because such a product has non-negative entries itself, we have

$$\left(\frac{dh_l}{dh_k}\right)_{i,j} \ge 0. \tag{27}$$

Theorem 4 (For 1-dimensional outputs mDFA computes the sign of the gradient). Let $\mathcal{L}(y, h_L)$ be a loss function and $m(x) := h_L \circ h_{L-1} \circ \dots \circ h_2 \circ h_1 \circ h_0(x)$ be a mono-net according to Definition 1 with parameters $\Theta := \{W_l, b_l \mid l = 1, \dots, L\}$. We denote by $\delta\theta$ the update value computed by mDFA and by $\nabla\theta$ the gradient $\frac{\partial\mathcal{L}}{\partial\theta}$ for any $\theta \in \{W_l, b_l\}$ with $1 \le l \le L$. If $n_L = 1$, it follows that

$$(\delta\theta)_{i,j} \cdot (\nabla\theta)_{i,j} \ge 0 \tag{28}$$

for each coordinate $(i, j)$ of $\theta$.

Proof. We distinguish two cases.

Case 1: $l = L$, i.e., $\theta$ is a parameter of the last layer. From the definition of mDFA we have

$$\delta\theta := \frac{d\mathcal{L}}{dh_L}\frac{dh_L}{d\theta}. \tag{29}$$
For the gradient, by applying the chain rule, we get

$$\nabla\theta = \frac{d\mathcal{L}}{d\theta} = \frac{d\mathcal{L}}{dh_L}\frac{dh_L}{d\theta}. \tag{30}$$

Thus, in the last layer the mDFA update equals the gradient.

Case 2: $l < L$, i.e., $\theta$ is a parameter of a hidden layer. From the definition of mDFA we have

$$(\delta\theta)_{i,j} := \left(\frac{d\mathcal{L}}{dh_L}\, B\, \frac{dh_l}{d\theta}\right)_{i,j} \tag{31}$$

with $B \in \mathbb{R}_+^{n_L \times n_l}$. Next, we expand the multiplication,

$$(\delta\theta)_{i,j} = \sum_{k=1}^{n_L} \left(\frac{d\mathcal{L}}{dh_L}\right)_k B_{k,i}\, \frac{d(h_l)_i}{d\theta_{i,j}} \tag{32}$$

$$= \frac{d(h_l)_i}{d\theta_{i,j}} \sum_{k=1}^{n_L} \left(\frac{d\mathcal{L}}{dh_L}\right)_k B_{k,i}. \tag{33}$$

We assumed that the output dimension is 1, i.e., $n_L = 1$. Therefore,

$$(\delta\theta)_{i,j} = \frac{d(h_l)_i}{d\theta_{i,j}} \left(\frac{d\mathcal{L}}{dh_L}\right)_1 B_{1,i}. \tag{34}$$

For the gradient, by applying the chain rule, we get

$$\nabla\theta = \frac{d\mathcal{L}}{d\theta} = \frac{d\mathcal{L}}{dh_L}\frac{dh_L}{d\theta} \tag{35}$$

$$= \frac{d\mathcal{L}}{dh_L}\frac{dh_L}{dh_l}\frac{dh_l}{d\theta}. \tag{36}$$

Like above, we expand the multiplication,

$$(\nabla\theta)_{i,j} = \sum_{k=1}^{n_L} \left(\frac{d\mathcal{L}}{dh_L}\right)_k \left(\frac{dh_L}{dh_l}\right)_{k,i} \frac{d(h_l)_i}{d\theta_{i,j}} \tag{37}$$

$$= \frac{d(h_l)_i}{d\theta_{i,j}} \sum_{k=1}^{n_L} \left(\frac{d\mathcal{L}}{dh_L}\right)_k \left(\frac{dh_L}{dh_l}\right)_{k,i}. \tag{38}$$

We assumed that the output dimension is 1, i.e., $n_L = 1$. Therefore,

$$(\nabla\theta)_{i,j} = \frac{d(h_l)_i}{d\theta_{i,j}} \left(\frac{d\mathcal{L}}{dh_L}\right)_1 \left(\frac{dh_L}{dh_l}\right)_{1,i}. \tag{39}$$

For Equation (28) we get, by applying Lemma 3,

$$(\delta\theta)_{i,j} \cdot (\nabla\theta)_{i,j} = \left(\frac{d(h_l)_i}{d\theta_{i,j}} \Big(\frac{d\mathcal{L}}{dh_L}\Big)_1 B_{1,i}\right) \cdot \left(\frac{d(h_l)_i}{d\theta_{i,j}} \Big(\frac{d\mathcal{L}}{dh_L}\Big)_1 \Big(\frac{dh_L}{dh_l}\Big)_{1,i}\right) \tag{40}$$

$$= \underbrace{\left(\frac{d(h_l)_i}{d\theta_{i,j}} \Big(\frac{d\mathcal{L}}{dh_L}\Big)_1\right)^2}_{\ge 0}\; \underbrace{B_{1,i}}_{\ge 0}\; \underbrace{\left(\frac{dh_L}{dh_l}\right)_{1,i}}_{\ge 0} \tag{41}$$

$$\ge 0. \tag{42}$$

A.3 FOR k ≥ 2-DIMENSIONAL OUTPUTS MDFA UPDATES MAY NOT ALIGN WITH THE GRADIENT

Theorem 5 (For $k \ge 2$-dimensional outputs mDFA updates may not align with the gradient). Let $\mathcal{L}(y, h_L)$ be a loss function and $m(x) := h_L \circ h_{L-1} \circ \dots \circ h_2 \circ h_1 \circ h_0(x)$ be a mono-net according to Definition 1 with parameters $\Theta := \{W_l, b_l \mid l = 1, \dots, L\}$. We denote by $\delta\theta$ the update value computed by mDFA and by $\nabla\theta$ the gradient $\frac{\partial\mathcal{L}}{\partial\theta}$ for any $\theta \in \{W_l, b_l\}$ with $1 \le l < L$. If $n_L \ge 2$, there exists the possibility that

$$(\delta\theta)_{i,j} \cdot (\nabla\theta)_{i,j} < 0 \tag{43}$$

for at least one coordinate $(i, j)$ of $\theta$.

Proof. We construct a minimal counterexample consisting of a network with two outputs and one hidden layer of two neurons. Let

$$W_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \frac{\partial\mathcal{L}}{\partial h_L} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \frac{\partial h_1}{\partial\theta} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \tag{44-47}$$

Then we have

$$\nabla\theta = \frac{d\mathcal{L}}{dh_L}\, W_2^T\, \frac{dh_1}{d\theta} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \tag{49-51}$$

and

$$\delta\theta = \frac{d\mathcal{L}}{dh_L}\, B_2\, \frac{dh_1}{d\theta} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \tag{52-54}$$

which are orthogonal.
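This failure mode can also be checked numerically. The sketch below follows the spirit of the construction above but simplifies it slightly (illustrative assumptions: the output error is written as a vector and dh1/dθ is taken as the identity); for these concrete numbers the coordinate-wise products are strictly negative, directly witnessing the claim of Theorem 5.

```python
import numpy as np

# With two output neurons, the mDFA update can disagree in sign with the gradient.
W2 = np.array([[0., 1.],
               [1., 0.]])   # non-negative forward weights (mono-net constraint)
B2 = np.eye(2)              # non-negative random feedback matrix
e  = np.array([1., -1.])    # dL/dh_L at the two output neurons

grad = e @ W2               # true feedback signal at h_1 (W2^T e; W2 is symmetric)
mdfa = e @ B2               # mDFA feedback signal at h_1

print(grad)                 # [-1.  1.]
print(mdfa)                 # [ 1. -1.]
print(grad * mdfa)          # [-1. -1.] -> both coordinate-wise products are negative
```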
" }, { "heading": "B EXPERIMENT SETUP", "text": "B.1 NETWORK ARCHITECTURES

Dataset     Fully-connected           Convolutional
CIFAR-100   1024, 1024                (96,5,2), (96,3,2), (96,3,1)
ImageNet    1024, 1024, 1024, 1024    (96,3,2), (96,5,1), (128,3,2), (192,3,1), (192,3,2), (384,3,1)

Network architectures; layers are separated by commas. The fully-connected column specifies the number of neurons in each layer. The convolutional column specifies the number of filters, kernel size, and stride of each layer.

B.2 n-CLASS IMAGENET

B.3 DISCUSSION ON HYPERPARAMETERS

We observed that all FA algorithms yield a more stable convergence with ”vanilla” stochastic gradient descent, i.e., no post-processing of the FA updates, than with Adam (Kingma & Ba, 2014) or RMSprop (Tieleman & Hinton, 2012). This may be unsurprising, as these acceleration methods were developed for gradient-based optimization, whereas FA updates only roughly align with the gradients at best.

Furthermore, we observed that FA algorithms achieve a decent validation accuracy after the first few training epochs. However, in contrast to BP, these methods may require over a hundred training epochs to converge fully. This ”fast-start-slow-convergence” behavior aligns with the observations made by Lillicrap et al. (2016).

We found that FA, DFA, and mDFA perform best when all forward weights are initialized to zero. Notable exceptions are networks with the ReLU activation function, which perform poorly with the all-zeros initialization scheme. This partially explains the poor observed performance of FA and DFA for ReLU networks.

In contrast to Nøkland (2016), we observed that backward weights initialized from a normal distribution perform slightly better than the scaled uniform initialization proposed by Nøkland (2016).

We confirmed the observation made by Bartunov et al. (2018) that feedback alignment algorithms are, in general, relatively sensitive to the hyperparameter choice." } ]
2020
LEARNING REPRESENTATIONS FOR BINARY-CLASSIFICATION WITHOUT BACKPROPAGATION
SP:0cfa52672cf34ffafece1171e48d6c344645dcf3
[ "This paper investigates the impact of using a reduced precision (i.e., quantization) in different deep reinforcement learning (DRL) algorithms. It shows that overall, reducing the precision of the neural network in DRL algorithms from 32 bits to 16 or 8 bits doesn't have much effect on the quality of the learned policy. It also shows how this quantization leads to a reduced memory cost and faster training and inference times.", "Training and deployment of DRL models is expensive. Quantization has proven useful in supervised learning, however it is yet to be tested thoroughly in DRL. This paper investigates whether quantization can be applied in DRL towards better resource usage (compute, energy) without harming the model quality. Both quantization-aware training (via fake quantization) and post-training quantization is investigated. The work demonstrates that policies can be reduced to 6-8 bits without quality loss. The paper indicates that quantization can indeed lower resource consumption without quality decline in realistic DRL tasks and for various algorithms." ]
Recent work has shown that quantization can help reduce the memory, compute, and energy demands of deep neural networks without significantly harming their quality. However, whether these prior techniques, applied traditionally to image-based models, work with the same efficacy for the sequential decision making process in reinforcement learning remains an unanswered question. To address this void, we conduct the first comprehensive empirical study that quantifies the effects of quantization on various deep reinforcement learning policies, with the intent to reduce their computational resource demands. We apply techniques such as post-training quantization and quantization aware training to a spectrum of reinforcement learning tasks (such as Pong, Breakout, BeamRider and more) and training algorithms (such as PPO, A2C, DDPG, and DQN). Across this spectrum of tasks and learning algorithms, we show that policies can be quantized to 6-8 bits of precision without loss of accuracy. We also show that certain tasks and reinforcement learning algorithms yield policies that are more difficult to quantize due to their effect of widening the models’ distribution of weights, and that quantization aware training consistently improves results over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that quantization aware training, like traditional regularizers, regularizes models by increasing exploration during the training process. Finally, we demonstrate the usefulness of quantization for reinforcement learning. We use half-precision training to train a Pong model 50% faster, and we deploy a quantized reinforcement learning based navigation policy to an embedded system, achieving an 18× speedup and a 4× reduction in memory usage over an unquantized policy.
[]
[ { "authors": [ "Simon Alford", "Ryan Robinett", "Lauren Milechin", "Jeremy Kepner" ], "title": "Pruned and Structurally Sparse Neural Networks", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Kai Arulkumaran", "Marc Peter Deisenroth", "Miles Brundage", "Anil Anthony Bharath" ], "title": "A Brief Survey of Deep Reinforcement Learning", "venue": "arXiv e-prints, art", "year": 2017 }, { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert A. Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Marc G. Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "CoRR, abs/1207.4708,", "year": 2012 }, { "authors": [ "C.M. Bishop" ], "title": "Training with noise is equivalent to tikhonov regularization", "venue": "Neural Computation,", "year": 1995 }, { "authors": [ "Greg Brockman", "Vicki Cheung", "Ludwig Pettersson", "Jonas Schneider", "John Schulman", "Jie Tang", "Wojciech Zaremba" ], "title": "URL http://arxiv.org/ abs/1606.01540", "venue": "Openai gym. CoRR,", "year": 2016 }, { "authors": [ "Guobin Chen", "Choi Wongun", "Xiang Yu", "Tony Han", "Manmohan Chandraker" ], "title": "Learning efficient object detection models with knowledge distillation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jungwook Choi", "Zhuo Wang", "Swagath Venkataramani", "Pierce I-Jen Chuang", "Vijayalakshmi Srinivasan", "Kailash Gopalakrishnan" ], "title": "Pact: Parameterized clipping activation for quantized neural networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "Karl Cobbe", "Oleg Klimov", "Chris Hesse", "Taehoon Kim", "John Schulman" ], "title": "Quantifying generalization in reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Matthieu Courbariaux", "Itay Hubara", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Davide Falanga", "Suseong Kim", "Davide Scaramuzza" ], "title": "How fast is too fast? the role of perception latency in high-speed sense and avoid", "venue": "IEEE Robotics and Automation Letters,", "year": 2019 }, { "authors": [ "Jesse Farebrother", "Marlos C. Machado", "Michael Bowling" ], "title": "Generalization and regularization in dqn", "venue": null, "year": 2018 }, { "authors": [ "S. Han", "X. Liu", "H. Mao", "J. Pu", "A. Pedram", "M.A. Horowitz", "W.J. 
Dally" ], "title": "Eie: Efficient inference engine on compressed deep neural network", "venue": "In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA),", "year": 2016 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Xingyu Liu", "Huizi Mao", "Jing Pu", "Ardavan Pedram", "Mark A Horowitz", "William J Dally" ], "title": "Eie: efficient inference engine on compressed deep neural network", "venue": "In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA),", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the Knowledge in a Neural Network", "venue": "arXiv e-prints, art", "year": 2015 }, { "authors": [ "Kazutoshi Hirose", "Ryota Uematsu", "Kota Ando", "Kodai Ueyoshi", "Masayuki Ikebe", "Tetsuya Asai", "Masato Motomura", "Shinya Takamaeda-Yamazaki" ], "title": "Quantization error-based regularization for hardware-aware neural network training", "venue": "Nonlinear Theory and Its Applications, IEICE,", "year": 2018 }, { "authors": [ "Benoit Jacob", "Skirmantas Kligys", "Bo Chen", "Menglong Zhu", "Matthew Tang", "Andrew Howard", "Hartwig Adam", "Dmitry" ], "title": "Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Michał Kempka", "Marek Wydmuch", "Grzegorz Runc", "Jakub Toczek", "Wojciech Jaśkowski" ], "title": "Vizdoom: A doom-based ai research platform for visual reinforcement learning", "venue": "IEEE Conference on Computational Intelligence and Games (CIG),", "year": 2016 }, { "authors": [ "Alex Kendall", "Jeffrey Hawke", "David Janz", "Przemyslaw Mazur", "Daniele Reda", "John-Mark Allen", "Vinh-Dieu Lam", "Alex Bewley", "Amar Shah" ], "title": "Learning to drive in a day", "venue": "CoRR, abs/1807.00412,", "year": 2018 }, { "authors": [ "Srivatsan Krishnan", "Behzad Boroujerdian", "William Fu", "Aleksandra Faust", "Vijay Janapa Reddi" ], "title": "Air learning: An AI research platform for algorithm-hardware benchmarking of autonomous aerial robots", "venue": "URL http://arxiv.org/abs/1906.00421", "year": 1906 }, { "authors": [ "Jan Kukacka", "Vladimir Golkov", "Daniel Cremers" ], "title": "Regularization for deep learning: A taxonomy", "venue": null, "year": 2017 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. 
Hunt", "Alexand er Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Ji Lin", "Chuang Gan", "Song Han" ], "title": "Defensive quantization: When efficiency meets robustness", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Christos Louizos", "Matthias Reisser", "Tijmen Blankevoort", "Efstratios Gavves", "Max Welling" ], "title": "Relaxed quantization for discretized neural networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P. Kingma" ], "title": "Learning sparse neural networks through l0 regularization", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Paulius Micikevicius", "Sharan Narang", "Jonah Alben", "Gregory F. Diamos", "Erich Elsen", "David Garcı́a", "Boris Ginsburg", "Michael Houston", "Oleksii Kuchaiev", "Ganesh Venkatesh", "Hao Wu" ], "title": "Mixed precision training", "venue": "CoRR, abs/1710.03740,", "year": 2017 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Volodymyr Mnih", "Adrià Puigdomènech Badia", "Mehdi Mirza", "Alex Graves", "Timothy P. Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "CoRR, abs/1602.01783,", "year": 2016 }, { "authors": [ "Pavlo Molchanov", "Stephen Tyree", "Tero Karras", "Timo Aila", "Jan Kautz" ], "title": "Pruning Convolutional Neural Networks for Resource Efficient Inference", "venue": "International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Jongsoo Park", "Sheng R. Li", "Wei Wen", "Hai Li", "Yiran Chen", "Pradeep Dubey" ], "title": "Holistic sparsecnn: Forging the trident of accuracy, speed, and size", "venue": "International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Antonio Polino", "Razvan Pascanu", "Dan Alistarh" ], "title": "Model compression via distillation and quantization", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Mengye Ren", "Andrei Pokrovsky", "Bin Yang", "Raquel Urtasun" ], "title": "SBNet: Sparse Blocks Network for Fast Inference", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Charbel Sakr", "Naresh R. 
Shanbhag" ], "title": "Per-tensor fixed-point quantization of the back-propagation algorithm", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Christopher J. Maddison", "Arthur Guez", "Laurent Sifre", "George van den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot", "Sander Dieleman", "Dominik Grewe", "John Nham", "Nal Kalchbrenner", "Ilya Sutskever", "Timothy Lillicrap", "Madeleine Leach", "Koray Kavukcuoglu", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Fangyi Zhang", "Jürgen Leitner", "Michael Milford", "Ben Upcroft", "Peter Corke" ], "title": "Towards VisionBased Deep Reinforcement Learning for Robotic Motion Control", "venue": "In Australasian Conference on Robotics and Automation (ACRA),", "year": 2015 }, { "authors": [ "Shuchang Zhou", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuxin Wu", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": "CoRR, abs/1606.06160,", "year": 2016 }, { "authors": [ "Chenzhuo Zhu", "Song Han", "Huizi Mao", "William J Dally" ], "title": "Trained ternary quantization", "venue": "arXiv preprint arXiv:1612.01064,", "year": 2016 } ]
[ { "heading": null, "text": "Recent work has shown that quantization can help reduce the memory, compute, and energy demands of deep neural networks without significantly harming their quality. However, whether these prior techniques, applied traditionally to imagebased models, work with the same efficacy to the sequential decision making process in reinforcement learning remains an unanswered question. To address this void, we conduct the first comprehensive empirical study that quantifies the effects of quantization on various deep reinforcement learning policies with the intent to reduce their computational resource demands. We apply techniques such as post-training quantization and quantization aware training to a spectrum of reinforcement learning tasks (such as Pong, Breakout, BeamRider and more) and training algorithms (such as PPO, A2C, DDPG, and DQN). Across this spectrum of tasks and learning algorithms, we show that policies can be quantized to 6-8 bits of precision without loss of accuracy. We also show that certain tasks and reinforcement learning algorithms yield policies that are more difficult to quantize due to their effect of widening the models’ distribution of weights and that quantization aware training consistently improves results over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that quantization aware training, like traditional regularizers, regularize models by increasing exploration during the training process. Finally, we demonstrate usefulness of quantization for reinforcement learning. We use half-precision training to train a Pong model 50% faster, and we deploy a quantized reinforcement learning based navigation policy to an embedded system, achieving an 18× speedup and a 4× reduction in memory usage over an unquantized policy." }, { "heading": "1 INTRODUCTION", "text": "Deep reinforcement learning has promise in many applications, ranging from game playing (Silver et al., 2016; 2017; Kempka et al., 2016) to robotics (Lillicrap et al., 2015; Zhang et al., 2015) to locomotion and transportation (Arulkumaran et al., 2017; Kendall et al., 2018). However, the training and deployment of reinforcement learning models remain challenging. Training is expensive because of their computationally expensive demands for repeatedly performing the forward and backward propagation in neural network training. Deploying deep reinforcement learning (DRL) models is prohibitively expensive, if not even impossible, due to the resource constraints on embedded computing systems typically used for applications, such as robotics and drone navigation.\nQuantization can be helpful in substantially reducing the memory, compute, and energy usage of deep learning models without significantly harming their quality (Han et al., 2015; Zhou et al., 2016; Han et al., 2016). However, it is unknown whether the same techniques carry over to reinforcement learning. Unlike models in supervised learning, the quality of a reinforcement learning policy depends on how effective it is in sequential decision making. Specifically, an agent’s current input and decision heavily affect its future state and future actions; it is unclear how quantization affects the long-term decision making capability of reinforcement learning policies. Also, there are many different algorithms to train a reinforcement learning policy. 
Algorithms like actor-critic methods (A2C), deep Q-networks (DQN), proximal policy optimization (PPO) and deep deterministic policy gradients (DDPG) are significantly different in their optimization goals and implementation details, and it is unclear whether quantization would be similarly effective across these algorithms. Finally, reinforcement learning policies are trained and applied to a wide range of environments, and it is unclear how quantization affects performance in tasks of differing complexity.

Here, we aim to understand quantization effects on deep reinforcement learning policies. We comprehensively benchmark the effects of quantization on policies trained by various reinforcement learning algorithms on different tasks, conducting in excess of 350 experiments to present representative and conclusive analysis. We perform experiments over 3 major axes: (1) environments (Atari Arcade, PyBullet, OpenAI Gym), (2) reinforcement learning training algorithms (Deep Q-Networks, Advantage Actor-Critic, Deep Deterministic Policy Gradients, Proximal Policy Optimization) and (3) quantization methods (post-training quantization, quantization aware training).

We show that quantization induces a regularization effect by increasing exploration during training. This motivates the use of quantization aware training, which we show demonstrates improved performance over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that deep reinforcement learning models can be quantized to 6-8 bits of precision without loss in quality. Furthermore, we analyze how each axis affects the final performance of the quantized model to develop insights into how to achieve better model quantization. Our results show that some tasks and training algorithms yield models to which post-training quantization is more difficult to apply, as they widen the spread of the models’ weight distribution, yielding higher quantization error. To demonstrate the usefulness of quantization for deep reinforcement learning, we 1) use half precision ops to train a Pong model 50% faster than full precision training and 2) deploy a quantized reinforcement learning based navigation policy onto an embedded system and achieve an 18× speedup and a 4× reduction in memory usage over an unquantized policy." }, { "heading": "2 RELATED WORK", "text": "Reducing neural network resource requirements is an active research topic. Techniques include quantization (Han et al., 2015; 2016; Zhu et al., 2016; Jacob et al., 2018; Lin et al., 2019; Polino et al., 2018; Sakr & Shanbhag, 2018), deep compression (Han et al., 2016), knowledge distillation (Hinton et al., 2015; Chen et al., 2017), sparsification (Han et al., 2016; Alford et al., 2018; Park et al., 2016; Louizos et al., 2018b; Bellec et al., 2017) and pruning (Alford et al., 2018; Molchanov et al., 2016; Li et al., 2016). These methods are employed because they compress models to reduce storage and memory requirements and because they enable fast and efficient inference and training with specialized operations. Below, we provide background for these motivations, describe the specific techniques that fall under these categories, and motivate why quantization for reinforcement learning needs study.

Compression for Memory and Storage: Techniques such as quantization, pruning, sparsification, and distillation reduce the amount of storage and memory required by deep neural networks. 
These techniques are motivated by the need to train and deploy neural networks in memory-constrained environments (e.g., IoT or mobile). Broadly, quantization reduces the precision of network weights (Han et al., 2015; 2016; Zhu et al., 2016), pruning removes various layers and filters of a network (Alford et al., 2018; Molchanov et al., 2016), sparsification zeros out selective network values (Molchanov et al., 2016; Alford et al., 2018) and distillation compresses an ensemble of networks into one (Hinton et al., 2015; Chen et al., 2017). Various algorithms combining these core techniques have been proposed. For example, Deep Compression (Han et al., 2015) demonstrated that a combination of weight-sharing, pruning, and quantization might reduce storage requirements by 35-49x. Importantly, these methods achieve high compression rates at small losses in accuracy by exploiting the redundancy inherent in neural networks.

Fast and Efficient Inference/Training: Methods like quantization, pruning, and sparsification may also be employed to improve the runtime of network inference and training as well as their energy consumption. Quantization reduces the precision of network weights and allows more efficient quantized operations to be used during training and deployment, for example, a ”binary” GEMM (general matrix multiply) operation (Rastegari et al., 2016; Courbariaux et al., 2016). Pruning speeds up neural networks by removing layers or filters to reduce the overall amount of computation necessary to make predictions (Molchanov et al., 2016). Finally, sparsification zeros out network weights and enables faster computation via specialized primitives like block-sparse matrix multiply (Ren et al., 2018). These techniques not only speed up neural networks but also decrease energy consumption by requiring fewer floating-point operations.

Quantization for Reinforcement Learning: Prior work on quantization focuses mostly on quantizing image-based, supervised models. However, there are several key differences between these models and reinforcement learning policies: an agent’s current input and decision affect its future state and actions; there are many complex training algorithms (e.g., DQN, PPO, A2C, DDPG); and there are many diverse tasks. To the best of our knowledge, this is the first work to apply and analyze the performance of quantization across a broad range of reinforcement learning tasks and training algorithms." }, { "heading": "3 QUANTIZED REINFORCEMENT LEARNING (QUARL)", "text": "We develop QuaRL, an open-source software framework that allows us to systematically apply traditional quantization methods to a broad spectrum of deep reinforcement learning models. We use the QuaRL framework to 1) evaluate how effective quantization is at compressing reinforcement learning policies, 2) analyze how quantization affects/is affected by the various environments and training algorithms in reinforcement learning and 3) establish a standard on the performance of quantization techniques across various training algorithms and environments.

Environments: We evaluate quantized models on three different types of environments: OpenAI Gym (Brockman et al., 2016), the Atari Arcade Learning Environment (Bellemare et al., 2012), and PyBullet (an open-source implementation of the MuJoCo environments). These environments consist of a variety of tasks, including CartPole, MountainCar, LunarLander, Atari games, Humanoid, etc. The complete list of environments used in the QuaRL framework is listed in Table 1. 
Evaluations across this spectrum of different tasks provide a robust benchmark of the performance of quantization applied to reinforcement learning.

Training Algorithms: We study quantization on four popular reinforcement learning algorithms, namely Advantage Actor-Critic (A2C) (Mnih et al., 2016), Deep Q-Network (DQN) (Mnih et al., 2013), Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). Evaluating these standard reinforcement learning algorithms, which are well established in the community, allows us to explore whether quantization is similarly effective across different reinforcement learning algorithms.

Quantization Methods: We apply standard quantization techniques to deep reinforcement learning models. Our main approaches are post-training quantization and quantization aware training. We apply these methods to models trained in different environments by different reinforcement learning algorithms to broadly understand their performance. We describe how these methods are applied in the context of reinforcement learning below." }, { "heading": "3.1 POST-TRAINING QUANTIZATION", "text": "Post-training quantization takes a trained full precision model (32-bit floating point) and quantizes its weights to lower precision values. We quantize weights down to fp16 (16-bit floating point) and int8 (8-bit integer) values. fp16 quantization is based on IEEE-754 floating point rounding and int8 quantization uses uniform affine quantization.

Fp16 Quantization: Fp16 quantization involves taking full precision (32-bit) values and mapping them to the nearest representable 16-bit float. The IEEE-754 standard specifies 16-bit floats with the format: sign (1 bit), exponent (5 bits), fraction (10 bits). Bits are grouped to specify the values of the sign ($S$), fraction ($F$) and exponent ($E$), which are then combined with the following formula to yield the effective value of the float:

$$V_{fp16} = (-1)^S \times \left(1 + \frac{F}{2^{10}}\right) \times 2^{E - 15}$$

In subsequent sections, we refer to float16 quantization using the notation $Q_{fp16}(W) = \mathrm{round}_{fp16}(W)$.

Uniform Affine Quantization: Uniform affine quantization (TensorFlow, 2018b) is applied to a full precision weight matrix and is performed by 1) calculating the minimum and maximum values of the matrix and 2) dividing this range equally into $2^n$ representable values (where $n$ is the number of bits being quantized to). As each representable value is equally spaced across this range, the quantized value can be represented by an integer. More specifically, quantization from full precision to n-bit integers is given by:

$$Q_n(W) = \left\lfloor \frac{W}{\delta} \right\rfloor + z \quad \text{where} \quad \delta = \frac{|\min(W, 0)| + |\max(W, 0)|}{2^n}, \quad z = \left\lfloor \frac{-\min(W, 0)}{\delta} \right\rfloor$$

Note that $\delta$ is the gap between representable numbers and $z$ is an offset so that 0 is exactly representable. Further note that we use $\min(W, 0)$ and $\max(W, 0)$ to ensure that 0 is always represented. To dequantize we perform:

$$D(W_q, \delta, z) = \delta(W_q - z)$$

In the context of QuaRL, int8 and fp16 quantization are applied after training a full precision model on an environment, as per Algorithm 1. In post-training quantization, uniform quantization is applied to each fully connected layer of the model (per-tensor quantization) and to each channel of convolution weights (per-axis quantization); activations are not quantized. We use post-training quantization to quantize to fp16 and int8 values.
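A minimal numpy sketch of the post-training quantizer defined above follows (per-tensor uniform affine quantization, plus fp16 rounding as a one-liner). It is written for exposition only; it is not the TensorFlow implementation used in QuaRL, and the weight tensor is an illustrative assumption.

```python
import numpy as np

def quantize_uniform_affine(w, n_bits=8):
    """Uniform affine quantization of a weight tensor to n-bit integers.

    Follows the formulas above: the range [min(W, 0), max(W, 0)] is divided
    into 2^n equally spaced values, with zero exactly representable.
    """
    w_min, w_max = min(w.min(), 0.0), max(w.max(), 0.0)
    delta = (abs(w_min) + abs(w_max)) / 2 ** n_bits  # gap between representable values
    z = int(np.floor(-w_min / delta))                # offset making 0 representable
    w_q = np.floor(w / delta).astype(np.int64) + z
    return w_q, delta, z

def dequantize(w_q, delta, z):
    return delta * (w_q - z)

w = np.random.default_rng(0).normal(scale=0.1, size=(64, 64)).astype(np.float32)

# fp16 post-training quantization is simply IEEE-754 rounding.
w_fp16 = w.astype(np.float16)

# int8 post-training quantization (per-tensor, as applied to fully connected layers).
w_q, delta, z = quantize_uniform_affine(w, n_bits=8)
err = np.abs(dequantize(w_q, delta, z) - w).max()
print("max int8 reconstruction error:", err, "<= delta =", delta)
```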
We use post-training quantization to quantize to fp16 and int8 values.\nAlgorithm 1: Post-Training Quantization for Reinforcement Learning\nInput: T: task or environment; L: reinforcement learning algorithm; A: model architecture; n: quantization bits (8 or 16)\nOutput: Reward\n1 M = Train(T, L, A)\n2 Q = Q_int8 if n = 8, else Q_fp16\n3 return Eval(Q(M))\nAlgorithm 2: Quantization Aware Training for Reinforcement Learning\nInput: T: task or environment; L: reinforcement learning algorithm; A: model architecture; n: quantization bits; Q_d: quantization delay\nOutput: Reward\n1 A_q = InsertAfterWeightsAndActivations(Q^train_n)\n2 M, TensorMinMaxes = TrainNoQuantMonitorWeightsActivationsRanges(T, L, A_q, Q_d)\n3 M = TrainWithQuantization(T, L, M, TensorMinMaxes, Q^train_n)\n4 return Eval(M, Q^train_n, TensorMinMaxes)" }, { "heading": "3.2 QUANTIZATION AWARE TRAINING", "text": "Quantization aware training involves retraining the reinforcement learning policies with weights and activations uniformly quantized to n-bit values. Importantly, weights are maintained in full fp32 precision, except that they are passed through the uniform quantization function before being used in the forward pass. Because of this, the technique is also known as "fake quantization" (TensorFlow, 2018b). Additionally, to improve training there is an additional parameter, the quantization delay (TensorFlow, 2018a), which specifies the number of full precision training steps before enabling quantization. While the number of steps is less than the quantization delay parameter, the minimum and maximum values of weights and activations are actively monitored. Afterwards, the previously captured minimum and maximum values are used to quantize the tensors (these values remain static from then on). Specifically:\nQ^train_n(W, V_min, V_max) = ⌊W/δ⌋ + z, where δ = (|V_min| + |V_max|)/2^n and z = ⌊-V_min/δ⌋\nwhere V_min and V_max are the monitored minimum and maximum values of the tensor (expanding V_min and V_max to include 0 if necessary). Intuitively, the expectation is that the training process eventually learns to account for the quantization error, yielding a higher performing quantized model. Note that uniform quantization is applied to fully connected weights in the model (per-tensor quantization) and to each channel for convolution weights (per-axis quantization). n-bit quantization is applied to each layer's weights and activations:\nx_{k+1} = A(Q^train_n(W_k, V_min, V_max) a_k + b), where A is the activation function\na_{k+1} = Q^train_n(x_{k+1}, V_min, V_max)\nDuring backward propagation, the gradient is passed through the quantization function unchanged (also known as the straight-through estimator (Hinton, 2012)), and the full precision weight matrix W is optimized under the approximation:\n∂Q^train_n(W, V_min, V_max)/∂W = I\nIn the context of the QuaRL framework, the policy neural network is retrained from scratch after inserting the quantization functions between weights and activations (all else being equal). At evaluation, full precision weights are passed through the uniform affine quantizer to simulate quantization error during inference. Algorithm 2 describes how quantization aware training is applied in QuaRL."
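As a companion to Algorithm 2 and the equations above, here is a minimal PyTorch-style sketch of the fake-quantization operation with a straight-through gradient; the class name and interfaces are illustrative assumptions, not the TensorFlow implementation QuaRL actually builds on.

import math
import torch

class FakeQuant(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, v_min, v_max, n):
        # Expand the monitored range to include 0, as in the definition of Q^train_n
        v_min, v_max = min(v_min, 0.0), max(v_max, 0.0)
        delta = (abs(v_min) + abs(v_max)) / 2 ** n
        z = math.floor(-v_min / delta)
        wq = torch.floor(w / delta) + z  # integer code, Q^train_n
        return delta * (wq - z)         # dequantize, so the forward pass carries quantization error

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: the gradient passes through unchanged
        return grad_output, None, None, None

w = torch.randn(8, 8, requires_grad=True)
w_fake_quant = FakeQuant.apply(w, w.min().item(), w.max().item(), 8)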
 }, { "heading": "4 RESULTS", "text": "In this section, we first show that quantization has a regularizing effect on reinforcement learning algorithms and can boost exploration. Secondly, we show that reinforcement learning algorithms can be quantized safely without significantly affecting the rewards. To that end, we perform evaluations across the three principal axes of QuaRL: environments, training algorithms, and quantization methods. For post-training quantization, we evaluate each policy for 100 episodes and average the rewards. For Quantization Aware Training (QAT), we train at least three policies and report the mean rewards over one hundred evaluations. Table 1 lists the space of the evaluations explored.\nQuantization as Regularization: To further establish the effects of quantization during training, we compare quantization-aware training with traditional regularization techniques (specifically layer norm (Ba et al., 2016; Kukacka et al., 2017)) and measure the amount of exploration these techniques induce. It has been shown in previous literature (Farebrother et al., 2018; Cobbe et al., 2018) that regularization actively helps reinforcement learning training generalize better; here we further reinforce this notion and additionally establish a relationship between quantization, generalization and exploration. We use the variance in the action distribution produced by the model as a proxy for exploration (computed across the probabilities the policy assigns to each action, so a peaked distribution has high variance): intuitively, since the policy samples from this distribution when performing an action, a policy that produces an action distribution with high variance is less likely to explore different states. Conversely, a low variance action distribution indicates high exploration, as the policy is more likely to take a different action than the highest scoring one.\nWe measure the variance in the action distribution produced by differently trained models (QAT-2, QAT-4, QAT-6, QAT-8, with layer norm, and full precision) at different stages of the training process. We collect model rewards and the action distribution variance over several rollouts with deterministic action selection (the model performs the highest scoring action). Importantly, we make sure to use deterministic action selection to ensure that the states reached are similar to the distribution seen by the model during training. To separate signal from noise, we furthermore smooth both the rewards and the action variances with a smoothing factor of 0.95.\nFigure 4 shows the variance in the action distribution produced by the models at different stages of training. Training with higher quantization levels (e.g., 2-bit vs 4-bit training), like layer norm regularization, induces lower action distribution variance and hence indicates more exploration. Furthermore, the reward plot in Figure 4 shows that despite lower action variance, models trained with quantization achieve a reward similar to the full precision baseline, which indicates that the higher exploration is facilitated by quantization and not by a lack of training. Note that quantization is turned on at 5,000,000 steps and we see its effects on the action distribution variance shortly after this point. In summary, the data shows that training with quantization, like traditional regularization, in part regularizes reinforcement learning training by facilitating exploration during the training process.
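To illustrate this exploration proxy, here is a small sketch of how action-distribution variance could be collected and smoothed over rollouts; the policy and environment interfaces are our own illustrative assumptions.

import numpy as np

def smooth(values, factor=0.95):
    # Exponential moving average, matching the 0.95 smoothing used for the curves
    out, acc = [], values[0]
    for v in values:
        acc = factor * acc + (1 - factor) * v
        out.append(acc)
    return out

def action_variances(policy, env, steps=1000):
    variances, obs = [], env.reset()
    for _ in range(steps):
        probs = policy.action_probabilities(obs)           # assumed policy interface
        variances.append(float(np.var(probs)))             # variance across action probabilities
        obs, _, done, _ = env.step(int(np.argmax(probs)))  # deterministic (highest scoring) action
        if done:
            obs = env.reset()
    return smooth(variances)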
Effectiveness of Quantization: To evaluate the overall effectiveness of quantization for deep reinforcement learning, we apply post-training quantization and quantization aware training to a spectrum of tasks and record their performance. We present the reward results for post-training quantization in Table 2. We also compute the percentage error of the performance of the quantized policy relative to that of the corresponding full precision baselines (E_fp16 and E_int8). Additionally, we report the mean of the errors across tasks for each of the training algorithms.\nThe absolute mean of the 8-bit and 16-bit relative errors ranges between 2% and 5% (with the exception of DQN), which indicates that models may be quantized to 8/16-bit precision without much loss in quality. Interestingly, the overall performance difference between 8-bit and 16-bit post-training quantization is minimal (with the exception of the DQN algorithm, for reasons we explain in Section 4). We believe this is because the policies' weight distributions are narrow enough that 8 bits is able to capture the distribution of weights without much error. In a few cases, post-training quantization yields better scores than the full precision policy. We believe that quantization injected an amount of noise that was small enough to maintain a good policy and large enough to regularize model behavior; this supports some of the results seen by Louizos et al. (2018a); Bishop (1995); Hirose et al. (2018); see the appendix for plots showing that there is a sweet spot for post-training quantization.\nFor quantization aware training, we train the policy with fake-quantization operations while maintaining the same model and hyperparameters (see Appendix). Figure 2 shows the results of quantization aware training on multiple environments and training algorithms to compress the policies from 8 bits down to 2 bits. Generally, the performance relative to the full precision baseline is maintained until 5/6-bit quantization, after which there is a drop in performance. Broadly, at 8 bits, we see no degradation in performance. From the data, we see that quantization aware training achieves higher rewards than post-training quantization and also sometimes outperforms the full precision baseline.\nEffect of Environment on Quantization Quality: To analyze the task's effect on quantization quality, we plot the distribution of weights of full precision models trained in three environments (Breakout, BeamRider and Pong) and their error after applying 8-bit post-training quantization to them. Each model uses the same network architecture and is trained using the same algorithm (DQN) with the same hyperparameters (see Appendix).\nFigure 3 shows that the task with the highest error (Breakout) has the widest weight distribution, the task with the second-highest error (BeamRider) has a narrower weight distribution, and the task with the lowest error (Pong) has the narrowest distribution. With an affine quantizer, quantizing a narrower distribution yields less error because the distribution can be captured at a fine granularity; conversely, a wider distribution requires larger gaps between representable numbers and thus increases quantization error. The trends indicate that the environment affects the spread of the models' weight distribution, which in turn affects quantization performance: specifically, environments that yield a wider distribution of model weights are more difficult to apply quantization to.
This observation suggests that regularizing the training process may yield better quantization performance.\nTable 3: Rewards and post-training quantization errors for DQN, PPO, and A2C on Breakout. A negative error indicates that the quantized model outperformed the full precision baseline.\nAlgorithm | Environment | fp32 Reward | E_int8 | E_fp16\nDQN | Breakout | 214 | 63.55% | -1.40%\nPPO | Breakout | 400 | 8.00% | 0.00%\nA2C | Breakout | 379 | 7.65% | 2.11%\n[Figure 4: Weight distributions (log frequency vs. weight value) for the policies trained using DQN, PPO and A2C. Min/max weights: DQN -2.21/1.31, PPO -1.02/0.58, A2C -0.79/0.72. DQN policy weights are more spread out and more difficult to cover effectively with 8-bit quantization, which explains the higher quantization error for DQN in Table 3.]\nEffect of Training Algorithm on Quantization Quality: To determine the effects of the reinforcement learning training algorithm on the performance of quantized models, we compare the performance of post-training quantized models trained by various algorithms. Table 3 shows the error of different reinforcement learning algorithms and their corresponding 8-bit post-training quantization error for the Atari Breakout game. The results indicate that the A2C training algorithm is most conducive to int8 post-training quantization, followed by PPO2 and DQN. Interestingly, we see a sharp performance drop compared to the corresponding full precision baseline when applying 8-bit post-training quantization to models trained by DQN. At 8 bits, models trained by PPO2 and A2C have relative errors of 8% and 7.65%, whereas the model trained by DQN has an error of ~64%. To understand this phenomenon, we plot the distribution of model weights trained by each algorithm, shown in Figure 4. The plot shows that the weight distribution of the model trained by DQN is significantly wider than those trained by PPO2 and A2C. A wider distribution of weights implies a higher quantization error, which explains the large error of the 8-bit quantized DQN model. This also explains why using more bits (fp16) is more effective for the model trained by DQN (which reduces the error relative to the full precision baseline from ~64% down to ~-1.4%). These results signify that different RL algorithms (on-policy vs. off-policy) have different objective functions and hence can result in completely different weight distributions; a wider distribution has a more pronounced impact on the quantization error."
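The link between weight-distribution width and quantization error can be checked numerically. Below is a small illustrative NumPy experiment (our own, not from the paper) that applies the 8-bit affine quantizer from Section 3.1 to a narrow and a wide synthetic weight distribution and compares reconstruction errors.

import numpy as np

def affine_quant_error(w, n=8):
    # Quantize/dequantize with the uniform affine scheme; report mean absolute error
    delta = (abs(min(w.min(), 0.0)) + abs(max(w.max(), 0.0))) / 2 ** n
    w_hat = delta * np.floor(w / delta)
    return float(np.mean(np.abs(w - w_hat)))

rng = np.random.default_rng(0)
narrow = rng.normal(0.0, 0.1, size=10_000)  # Pong-like: narrow weight spread
wide = rng.normal(0.0, 0.5, size=10_000)    # Breakout-like: wide weight spread
# The error grows with the spread, since delta scales with the tensor's range
print(affine_quant_error(narrow), affine_quant_error(wide))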
 }, { "heading": "5 CASE STUDIES", "text": "To show the usefulness of our results, we use quantization to optimize the training and deployment of reinforcement learning policies. We 1) train a Pong model 1.5× faster by using mixed precision optimization and 2) deploy a quantized robot navigation model onto a resource-constrained embedded system (RasPi-3b), demonstrating a 4× reduction in memory and an 18× speedup in inference. Faster training time means running more experiments in the same time. Achieving speedups on resource-constrained devices enables deployment of the policies on real robots.\nMixed/Half-Precision Training: Motivated by the observation that reinforcement learning training is robust to quantization error, we train three policies of increasing model complexity (Policy A, Policy B, and Policy C) using mixed precision training and compare their performance to that of full precision training (see Appendix for details). In mixed precision training, the policy weights, activations, and gradients are represented in fp16. A master copy of the weights is stored in full precision (fp32) and updates are made to it during the backward pass (Micikevicius et al., 2017). We measure the runtime and convergence rate of both full precision and mixed precision training (see Appendix).\nTable 4: Mixed precision (MP) training runtimes for reinforcement learning (DQN-Pong).\nNetwork | fp32 Runtime (min) | MP Runtime (min) | Speedup\nPolicy A | 127 | 156 | 0.87×\nPolicy B | 179 | 172 | 1.04×\nPolicy C | 391 | 242 | 1.61×\n[Figure 5: Reward vs. training step (0 to 1M) for Policies A, B and C, comparing mixed precision and fp32-only training.]\nFigure 5 shows that all three policies converge under both full precision and mixed precision training. Interestingly, for Policy B, training with mixed precision yields faster convergence; we believe that some amount of quantization error speeds up the training process. Table 4 shows the computational speedup to the training loop from using mixed precision training. While using mixed precision training on smaller networks (Policy A) may slow down training iterations (as the overhead of fp32-to-fp16 conversions outweighs the speedup of low-precision ops), larger networks (Policy C) show up to a 60% speedup. Generally, our results show that mixed precision may speed up the training process by up to 1.6× without harming convergence.
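For reference, here is a minimal sketch of a mixed precision training step in the style of Micikevicius et al. (2017). It uses PyTorch's automatic mixed precision utilities purely as an illustration; the paper's experiments were run in TensorFlow, so the tooling below is an assumption rather than the actual setup.

import torch

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4)
).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # master weights remain fp32
scaler = torch.cuda.amp.GradScaler()                       # loss scaling avoids fp16 gradient underflow

def train_step(obs, targets):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():    # forward pass runs in fp16 where it is safe
        loss = torch.nn.functional.mse_loss(model(obs), targets)
    scaler.scale(loss).backward()      # gradients are computed on the scaled loss
    scaler.step(optimizer)             # unscales gradients, then updates the fp32 master copy
    scaler.update()
    return loss.item()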
Quantized Policy for Deployment: To show the benefits of quantization in deploying reinforcement learning policies, we train multiple point-to-point navigation models (Policy I, II, and III) for aerial robots using Air Learning (Krishnan et al., 2019) and deploy them onto a RasPi-3b, a cost-effective, general-purpose embedded processor. The RasPi-3b is used as a proxy for the compute platform of the aerial robot; other platforms on aerial robots have similar characteristics. For each of these policies, we report the accuracies and inference speedups attained by the int8 and fp32 policies.\nTable 5 shows the accuracies and inference speedups attained for each corresponding quantized policy. We see that quantizing smaller policies (Policy I) yields moderate inference speedups (1.18× for Policy I), while quantizing larger models (Policies II, III) can speed up inference by up to 18×. This speedup in Policy III's execution time speeds up the generation of hardware actuation commands from 5 Hz (fp32) to 90 Hz (int8). Note that in this experiment we quantize both weights and activations to 8-bit integers; quantized models exhibit a larger loss in accuracy as activations are more difficult to quantize without some form of calibration to determine the range to quantize activation values to (Choi et al., 2018).\nA deeper investigation shows that Policies II and III take more memory than the total RAM capacity of the RasPi-3b, causing numerous accesses to swap memory (refer to Appendix) during inference, which is extremely slow. Quantizing these policies allows them to fit into the RasPi's RAM, eliminating accesses to swap and boosting performance by an order of magnitude. Figure 5 shows the memory usage while executing the quantized and unquantized versions of Policy III, and shows how, without quantization, memory usage skyrockets above the total RAM capacity of the board.\nIn the context of real-world deployment of an aerial (or any other type of) robot, a speedup in policy execution potentially translates to faster actuation commands to the aerial robot, which in turn implies faster and better responsiveness in a highly dynamic environment (Falanga et al., 2019). Our case study demonstrates how quantization can facilitate the deployment of accurate policies trained using reinforcement learning onto a resource-constrained platform." }, { "heading": "6 CONCLUSION", "text": "We perform the first study of quantization effects on deep reinforcement learning using QuaRL, a software framework to benchmark and analyze the effects of quantization on various reinforcement learning tasks and algorithms. We analyze the performance in terms of rewards for post-training quantization and quantization aware training as applied to multiple reinforcement learning tasks and algorithms, with the high-level goal of reducing policies' resource requirements for efficient training and deployment. We broadly demonstrate that reinforcement learning models may be quantized down to 8/16 bits without loss of performance. Also, we link quantization performance to the distribution of the models' weights, demonstrating that some reinforcement learning algorithms and tasks are more difficult to quantize due to their effect of widening the models' weight distribution. Additionally, we show that quantization during training acts as a regularizer which improves exploration. Finally, we apply our results to optimize the training and inference of reinforcement learning models, demonstrating a 50% training speedup for Pong using mixed precision optimization and up to an 18× inference speedup on a RasPi by quantizing a navigation policy. In summary, our findings indicate that there is much potential for the future of quantization of deep reinforcement learning policies." }, { "heading": "A POST TRAINING QUANTIZATION RESULTS", "text": "Here we tabulate the post-training quantization results listed in Table 2 into four separate tables for clarity. Each table corresponds to the post-training quantization results for a specific algorithm. Table 5 tabulates the post-training quantization results for the A2C algorithm. Likewise, Table 6 tabulates the post-training quantization results for DQN. Table 7 and Table 8 list the post-training quantization results for the PPO and DDPG algorithms, respectively." }, { "heading": "B DQN HYPERPARAMETERS FOR ATARI", "text": "For all Atari games in the results section we use a standard 3-layer Conv (128) + 128 FC network. Hyperparameters are listed in Table 9.
We use stable-baselines (Hill et al., 2018) for all the reinforcement learning experiments, with TensorFlow version 1.14 as the machine learning backend." }, { "heading": "C MIXED PRECISION HYPERPARAMETERS", "text": "In mixed precision training, we used three policies, namely Policy A, Policy B and Policy C. The architectures of these policies are tabulated in Table 10. For measuring the runtimes of fp32 and fp16 training, we use the Linux time command for each run and add the usr and sys times to obtain the runtimes for both mixed-precision training and fp32 training. The hyperparameters used for training the DQN-Pong agent are listed in Table 9." }, { "heading": "D QUANTIZED POLICY DEPLOYMENT", "text": "Here we describe the methodology used to train a point-to-point navigation policy in Air Learning and deploy it on an embedded compute platform such as the Ras-Pi 3b+. Air Learning is an AI research platform that provides infrastructure components and tools to train fully functional reinforcement learning policies for aerial robots. In simple environments like OpenAI Gym and Atari, training and inference happen in the same environment without any randomization. In contrast to these environments, Air Learning allows us to randomize various environmental parameters such as arena size, number of obstacles, goal position, etc.\nIn this study, we fix the arena size to 25 m × 25 m × 20 m. The number of obstacles at any time is between one and five, chosen randomly on an episode-to-episode basis. The positions of these obstacles and the end point (goal) are also changed every episode. We train the aerial robot to reach the end point using the DQN algorithm. The input to the policy is data from a sensor mounted on the drone along with IMU measurements. The output of the policy is one of 25 actions with different velocity and yaw rates. The reward function we use in this study is defined by the following equation:\nr = 1000α - 100β - D_g - D_c·δ - 1 (1)\nHere, α is a binary variable whose value is '1' if the agent reaches the goal, else its value is '0'. β is a binary variable which is set to '1' if the aerial robot collides with any obstacle or runs out of the maximum allocated steps for an episode;¹ otherwise, β is '0', effectively penalizing the agent for hitting an obstacle or not reaching the end point in time. D_g is the distance to the end point from the agent's current location, motivating the agent to move closer to the goal. D_c is the distance correction, which is applied to penalize the agent if it chooses actions which speed the agent away from the goal. The distance correction term is defined as follows:\nD_c = (V_max - V_now)·t_max (2)\nV_max is the maximum velocity possible for the agent, which for DQN is fixed at 2.5 m/s. V_now is the current velocity of the agent and t_max is the duration of the actuation.\n¹We set the maximum allowed steps in an episode to 750. This is to make sure the agent finds the end point (goal) within a finite number of steps.\nWe train three policies, namely Policy I, Policy II, and Policy III. Each policy is learned through curriculum learning, where we make the end goal farther away as the training progresses. We terminate the training once the agent has finished 1 million steps. We evaluate all three policies in fp32 and quantized int8 data types for 100 evaluations in Air Learning and report the success rate.
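The reward in Equations (1) and (2) is simple to compute; below is a small Python sketch of the formula (our own illustration, not Air Learning's source code). The values of t_max and the weight δ are not given in the text, so they appear here as assumed parameters.

def distance_correction(v_max, v_now, t_max):
    # Eq. (2): larger penalty when the agent moves slower than its maximum velocity
    return (v_max - v_now) * t_max

def reward(goal_reached, crashed_or_timeout, dist_to_goal, v_now,
           v_max=2.5, t_max=1.0, delta=1.0):  # t_max and delta are assumptions
    # Eq. (1): r = 1000*alpha - 100*beta - D_g - D_c*delta - 1
    alpha = 1.0 if goal_reached else 0.0
    beta = 1.0 if crashed_or_timeout else 0.0
    d_c = distance_correction(v_max, v_now, t_max)
    return 1000.0 * alpha - 100.0 * beta - dist_to_goal - d_c * delta - 1.0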
We also take these policies and characterize the system performance on a Ras-Pi 3b platform. The Ras-Pi 3b is a proxy for the compute platform available on the aerial robot. The hardware specification for the Ras-Pi 3b is shown in Table 11.\nWe allocate a region of storage space as swap memory. Swap is a region of memory allocated on disk that is used when system memory is fully utilized by a process. On the Ras-Pi 3b, the swap memory is allocated in Flash storage." }, { "heading": "E POST-TRAINING QUANTIZATION SWEET SPOT", "text": "Figure 7 shows that there is a sweet spot for post-training quantization. Sometimes, quantizing to fewer bits outperforms higher precision quantization. Each plot was generated by applying post-training quantization to the full precision baselines and evaluating over 10 runs." } ]
2019
QUANTIZED REINFORCEMENT LEARNING (QUARL)
SP:8283eb652046558e12c67447dddebcb52ee9de94
[ "The paper studies self-supervised learning from very few unlabeled images, down to the extreme case where only a single image is used for training. From the few/single image(s) available for training, a data set of the same size as some unmodified reference data set (ImageNet, Cifar-10/100) is generated through heavy data augmentation (cropping, scaling, rotation, contrast changes, adding noise). Three popular self-supervised learning algorithms are then trained on this data sets, namely (Bi)GAN, RotNet, and DeepCluster, and the linear probing accuracy on different blocks is compared to that obtained by training the same methods on the reference data sets. The linear probing accuracy from the first few conv layers of the network trained on the single/few image data set is found to be comparable to or better than that of the same model trained on the full reference data set.", "This paper explores self-supervised learning in the low-data regime, comparing results to self-supervised learning on larger datasets. BiGAN, RotNet, and DeepCluster serve as the reference self-supervised methods. It argues that early layers of a convolutional neural network can be effectively learned from a single source image, with data augmentation. A performance gap exists for deeper layers, suggesting that larger datasets are required for self-supervised learning of useful filters in deeper network layers." ]
We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can learn the first few layers of a convolutional network from a single image as well as using millions of images and manual labels, provided that strong data augmentation is used. However, for deeper layers the gap with manual supervision cannot be closed even if millions of unlabelled images are used for training. We conclude that: (1) the weights of the early layers of deep networks contain limited information about the statistics of natural images, that (2) such low-level statistics can be learned through self-supervision just as well as through strong supervision, and that (3) the low-level statistics can be captured via synthetic transformations instead of using a large image dataset.
[ { "affiliations": [], "name": "Yuki M. Asano" }, { "affiliations": [], "name": "Christian Rupprecht" }, { "affiliations": [], "name": "Andrea Vedaldi" } ]
[ { "authors": [ "REFERENCES Pulkit Agrawal", "Joao Carreira", "Jitendra Malik" ], "title": "Learning to see by moving", "venue": "In Proc. ICCV, pp. 37–45", "year": 2015 }, { "authors": [ "R. Arandjelović", "A. Zisserman" ], "title": "Look, listen and learn", "venue": "In Proc. ICCV,", "year": 2017 }, { "authors": [ "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning by predicting noise", "venue": "In Proc. ICML,", "year": 2017 }, { "authors": [ "Joan Bruna", "Stéphane Mallat" ], "title": "Invariant scattering convolution networks", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "M. Caron", "P. Bojanowski", "A. Joulin", "M. Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proc. ECCV, 2018", "year": 2018 }, { "authors": [ "N. Dalal", "B Triggs" ], "title": "Histogram of Oriented Gradients for Human Detection", "venue": "In Proc. CVPR,", "year": 2005 }, { "authors": [ "Virginia R de Sa" ], "title": "Learning classification with unlabeled data", "venue": "In NIPS, pp", "year": 1994 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In Proc. CVPR,", "year": 2009 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In Proc. ICCV, pp", "year": 2015 }, { "authors": [ "Jeff Donahue", "Philipp Krhenbhl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "Proc. ICLR,", "year": 2017 }, { "authors": [ "A. Dosovitskiy", "P. Fischer", "J.T. Springenberg", "M. Riedmiller", "T. Brox" ], "title": "Discriminative unsupervised feature learning with exemplar convolutional neural networks", "venue": "IEEE PAMI,", "year": 2016 }, { "authors": [ "V. Dumoulin", "I. Belghazi", "B. Poole", "O. Mastropietro", "A. Lamb", "M. Arjovsky", "A. Courville" ], "title": "Adversarially learned inference", "venue": "arXiv preprint arXiv:1606.00704,", "year": 2016 }, { "authors": [ "D. Erhan", "Y. Bengio", "A. Courville", "P. Vincent" ], "title": "Visualizing higher-layer features of a deep network", "venue": "Technical Report 1341,", "year": 2009 }, { "authors": [ "Chuang Gan", "Boqing Gong", "Kun Liu", "Hao Su", "Leonidas J Guibas" ], "title": "Geometry guided convolutional neural networks for self-supervised video representation learning", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "Rouhan Gao", "Dinesh Jayaraman", "Kristen Grauman" ], "title": "Object-centric representation learning from unlabeled videos", "venue": "In Proc. ACCV,", "year": 2016 }, { "authors": [ "Leon A Gatys", "Alexander S Ecker", "Matthias Bethge" ], "title": "Image style transfer using convolutional neural networks", "venue": "In Proc. CVPR,", "year": 2016 }, { "authors": [ "R. Geirhos", "P. Rubisch", "C. Michaelis", "M. Bethge", "F.A. Wichmann", "W. Brendel" ], "title": "Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Spyros Gidaris", "Praveen Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "In Proc. ICLR,", "year": 2018 }, { "authors": [ "R.B. Girshick" ], "title": "Fast R-CNN", "venue": "In Proc. 
ICCV,", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Priya Goyal", "Dhruv Mahajan", "Abhinav Gupta", "Ishan Misra" ], "title": "Scaling and benchmarking self-supervised visual representation learning", "venue": "arXiv preprint arXiv:1905.01235,", "year": 1905 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "B. Hariharan", "J. Malik", "D. Ramanan" ], "title": "Discriminative decorrelation for clustering and classification", "venue": "In Proc. ECCV,", "year": 2012 }, { "authors": [ "M. Huh", "P. Agrawal", "A.A. Efros" ], "title": "What makes imagenet good for transfer learning", "venue": "arXiv preprint arXiv:1608.08614,", "year": 2016 }, { "authors": [ "Phillip Isola", "Daniel Zoran", "Dilip Krishnan", "Edward H Adelson" ], "title": "Learning visual groups from cooccurrences in space and time", "venue": "In Proc. ICLR,", "year": 2015 }, { "authors": [ "Dinesh Jayaraman", "Kristen Grauman" ], "title": "Learning image representations tied to ego-motion", "venue": "In Proc. ICCV,", "year": 2015 }, { "authors": [ "Dinesh Jayaraman", "Kristen Grauman" ], "title": "Slow and steady feature analysis: higher order temporal coherence in video", "venue": "In Proc. CVPR,", "year": 2016 }, { "authors": [ "Simon Jenni", "Paolo Favaro" ], "title": "Self-supervised feature learning by learning to spot artifacts", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "J. Johnson", "A. Alahi", "L. Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": "In Proc. ECCV,", "year": 2016 }, { "authors": [ "R. Kat", "R. Jevnisek", "S. Avidan" ], "title": "Matching pixels using co-occurrence statistics", "venue": "In Proc. ICCV,", "year": 2018 }, { "authors": [ "Philipp Krähenbühl", "Carl Doersch", "Jeff Donahue", "Trevor Darrell" ], "title": "Data-dependent initializations of convolutional neural networks", "venue": "Proc. ICLR,", "year": 2016 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. Hinton" ], "title": "ImageNet classification with deep convolutional neural networks", "venue": "In NIPS, pp. 1106–1114,", "year": 2012 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich" ], "title": "Colorization as a proxy task for visual understanding", "venue": "In Proc. CVPR,", "year": 2017 }, { "authors": [ "Hsin-Ying Lee", "Jia-Bin Huang", "Maneesh Kumar Singh", "Ming-Hsuan Yang" ], "title": "Unsupervised representation learning by sorting sequence", "venue": "In Proc. ICCV,", "year": 2017 }, { "authors": [ "D. Lowe" ], "title": "Distinctive image features from scale-invariant keypoints", "venue": "IJCV, 60(2):91–110,", "year": 2004 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "A. Mahendran", "J. 
Thewlis", "A. Vedaldi" ], "title": "Cross pixel optical-flow similarity for self-supervised learning", "venue": "In Proc. ACCV,", "year": 2018 }, { "authors": [ "T. Malisiewicz", "A. Gupta", "A.A. Efros" ], "title": "Ensemble of exemplar-SVMs for object detection and beyond", "venue": "In Proc. ICCV,", "year": 2011 }, { "authors": [ "Ishan Misra", "C. Lawrence Zitnick", "Martial Hebert" ], "title": "Shuffle and learn: Unsupervised learning using temporal order verification", "venue": "In Proc. ECCV,", "year": 2016 }, { "authors": [ "T Mundhenk", "Daniel Ho", "Barry Y. Chen" ], "title": "Improvements to context based self-supervised learning", "venue": "In Proc. CVPR,", "year": 2017 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In Proc. ECCV,", "year": 2016 }, { "authors": [ "Mehdi Noroozi", "Hamed Pirsiavash", "Paolo Favaro" ], "title": "Representation learning by learning to count", "venue": "In Proc. ICCV,", "year": 2017 }, { "authors": [ "Mehdi Noroozi", "Ananth Vinjimoor", "Paolo Favaro", "Hamed Pirsiavash" ], "title": "Boosting self-supervised learning via knowledge transfer", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "Chris Olah", "Arvind Satyanarayan", "Ian Johnson", "Shan Carter", "Ludwig Schubert", "Katherine Ye", "Alexander Mordvintsev" ], "title": "The building blocks of interpretability", "venue": "Distill, 3(3):e10,", "year": 2018 }, { "authors": [ "B.A. Olshausen", "D.J. Field" ], "title": "Sparse coding with an overcomplete basis set: A strategy employed by V1", "venue": "Vision Research,", "year": 1997 }, { "authors": [ "M. Oquab", "L. Bottou", "I. Laptev", "J. Sivic" ], "title": "Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks", "venue": "In Proc. CVPR,", "year": 2014 }, { "authors": [ "Andrew Owens", "Phillip Isola", "Josh H. McDermott", "Antonio Torralba", "Edward H. Adelson", "William T. Freeman" ], "title": "Visually indicated sounds", "venue": "In Proc. CVPR,", "year": 2016 }, { "authors": [ "Edouard Oyallon", "Eugene Belilovsky", "Sergey Zagoruyko" ], "title": "Scaling the scattering transform: Deep hybrid networks", "venue": null, "year": 2017 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proc. CVPR,", "year": 2016 }, { "authors": [ "Deepak Pathak", "Ross Girshick", "Piotr Dollár", "Trevor Darrell", "Bharath Hariharan" ], "title": "Learning features by watching objects move", "venue": "In Proc. CVPR,", "year": 2017 }, { "authors": [ "Zhongzheng Ren", "Yong Jae Lee" ], "title": "Cross-domain self-supervised multi-task feature learning using synthetic imagery", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "A. Rodriguez", "V. Naresh Boddeti", "BVK V. Kumar", "A. Mahalanobis" ], "title": "Maximum margin correlation filter: A new approach for localization and classification", "venue": "IEEE Transactions on Image Processing,", "year": 2013 }, { "authors": [ "Tamar Rott Shaham", "Tali Dekel", "Tomer Michaeli" ], "title": "Singan: Learning a generative model from a single natural image", "venue": "In Computer Vision (ICCV), IEEE International Conference on,", "year": 2019 }, { "authors": [ "Pierre Sermanet" ], "title": "Time-contrastive networks: Self-supervised learning from video", "venue": "In Proc. Intl. Conf. 
on Robotics and Automation,", "year": 2018 }, { "authors": [ "H. Shin", "H.R. Roth", "M. Gao", "L. Lu", "Z. Xu", "I. Nogues", "J. Yao", "D. Mollura", "R.M. Summers" ], "title": "Deep convolutional neural networks for computer-aided detection: Cnn architectures, dataset characteristics and transfer learning", "venue": "IEEE Trans. on Medical Imaging,", "year": 2016 }, { "authors": [ "N. Srivastava", "E. Mansimov", "R. Salakhudinov" ], "title": "Unsupervised learning of video representations using lstms", "venue": "In Proc. ICML,", "year": 2015 }, { "authors": [ "I. Talmi", "R. Mechrez", "L. Zelnik-Manor" ], "title": "Template matching with deformable diversity similarity", "venue": "In Proc. CVPR,", "year": 2017 }, { "authors": [ "D. Ulyanov", "A. Vedaldi", "V. Lempitsky" ], "title": "Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis", "venue": "In Proc. CVPR,", "year": 2017 }, { "authors": [ "D. Ulyanov", "A. Vedaldi", "V. Lempitsky" ], "title": "Deep image prior", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "X. Wang", "A. Gupta" ], "title": "Unsupervised learning of visual representations using videos", "venue": "In Proc. ICCV, pp. 2794–2802,", "year": 2015 }, { "authors": [ "Xiaolong Wang", "Kaiming He", "Abhinav Gupta" ], "title": "Transitive invariance for self-supervised visual representation learning", "venue": "In Proc. ICCV,", "year": 2017 }, { "authors": [ "D. Wei", "J. Lim", "A. Zisserman", "W.T. Freeman" ], "title": "Learning and using the arrow of time", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "J. Yosinski", "J. Clune", "Y. Bengio", "H. Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "In NIPS, pp", "year": 2014 }, { "authors": [ "M.D. Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In Proc. ECCV,", "year": 2014 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In Proc. ECCV,", "year": 2016 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A. Efros" ], "title": "Split-brain autoencoders: Unsupervised learning by crosschannel prediction", "venue": "In Proc. CVPR, 2017", "year": 2017 }, { "authors": [ "A DeepCluster" ], "title": "textures, the surprising effectiveness of training a network using a single image can be explained by the recent finding that even CNNs trained on ImageNet rely on texture (as opposed to shape) information to classify (Geirhos et al., 2019)", "venue": "RETRAINING FROM SINGLE IMAGE INITIALIZATION", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite tremendous progress in supervised learning, learning without external supervision remains difficult. Self-supervision has recently emerged as one of the most promising approaches to address this limitation. Self-supervision builds on the fact that convolutional neural networks (CNNs) transfer well between tasks (Shin et al., 2016; Oquab et al., 2014; Girshick, 2015; Huh et al., 2016). The idea then is to pre-train networks via pretext tasks that do not require expensive manual annotations and can be automatically generated from the data itself. Once pre-trained, networks can be applied to a target task by using only a modest amount of labelled data.\nEarly successes in self-supervision have encouraged authors to develop a large variety of pretext tasks, from colorization to rotation estimation and image autoencoding. Recent papers have shown performance competitive with supervised learning by learning complex neural networks on very large image datasets. Nevertheless, for a given model complexity, pre-training by using an off-theshelf annotated image datasets such as ImageNet remains much more efficient.\nIn this paper, we aim to investigate the effectiveness of current self-supervised approaches by characterizing how much information they can extract from a given dataset of images. Since deep networks learn a hierarchy of representations, we further break down this investigation on a per-layer basis. We are motivated by the fact that the first few layers of most networks extract low-level information (Yosinski et al., 2014), and thus learning them may not require the high-level semantic information captured by manual labels.\nConcretely, in this paper we answer the following simple question: “is self-supervision able to exploit the information contained in a large number of images in order to learn different parts of a neural network?”\nWe contribute two key findings. First, we show that as little as a single image is sufficient, when combined with self-supervision and data augmentation, to learn the first few layers of standard deep networks as well as using millions of images and full supervision (Figure 1). Hence, while selfsupervised learning works well for these layers, this may be due more to the limited complexity of such features than the strength of the supervisory technique. This also confirms the intuition that early layers in a convolutional network amounts to low-level feature extractors, analogous to early\nconv1 conv2 conv3 conv4 conv5 0\n20\n40\n60\n80 100 % su pe rv ise d pe rfo rm an ce\nLinear Classifier on ImageNet\nRandom RotNet 1-RotNet BiGAN 1-BiGAN DeepCluster 1-DeepCluster\nFigure 1: Single-image self-supervision. We show that several self-supervision methods can be used to train the first few layers of a deep neural networks using a single training image, such as this Image A, B or even C (above), provided that sufficient data augmentation is used.\nlearned and hand-crafted features for visual recognition (Olshausen & Field, 1997; Lowe, 2004; Dalal & Triggs, 2005). Finally, it demonstrates the importance of image transformations in learning such low-level features as opposed to image diversity.1\nOur second finding is about the deeper layers of the network. For these, self-supervision remains inferior to strong supervision even if millions of images are used for training. Our finding is that this is unlikely to change with the addition of more data. 
In particular, we show that training these layers with self-supervision and a single image already achieves as much as two thirds of the performance that can be achieved by using a million different images.\nWe show that these conclusions hold true for three different self-supervised methods, BiGAN (Donahue et al., 2017), RotNet (Gidaris et al., 2018) and DeepCluster (Caron et al., 2018), which are representative of the spectrum of techniques that are currently popular. We find that performance as a function of the amount of data is dependent on the method, but all three methods can indeed leverage a single image to learn the first few layers of a deep network almost "perfectly".\nOverall, while our results do not improve self-supervision per se, they help to characterize the limitations of current methods and to better focus on the important open challenges." }, { "heading": "2 RELATED WORK", "text": "Our paper relates to three broad areas of research: (a) self-supervised/unsupervised learning, (b) learning from a single sample, and (c) designing/learning low-level feature extractors. We discuss closely related work for each.\nSelf-supervised learning: A wide variety of proxy tasks, requiring no manual annotations, have been proposed for the self-training of deep convolutional neural networks. These methods use various cues and tasks, namely in-painting (Pathak et al., 2016), patch context and jigsaw puzzles (Doersch et al., 2015; Noroozi & Favaro, 2016; Noroozi et al., 2018; Mundhenk et al., 2017), clustering (Caron et al., 2018), noise-as-targets (Bojanowski & Joulin, 2017), colorization (Zhang et al., 2016; Larsson et al., 2017), generation (Jenni & Favaro, 2018; Ren & Lee, 2018; Donahue et al., 2017), geometry (Dosovitskiy et al., 2016; Gidaris et al., 2018) and counting (Noroozi et al., 2017). The idea is that the pretext task can be constructed automatically and easily from images alone. Thus, methods often modify information in the images and require the network to recover it. In-painting or colorization techniques fall in this category. However, these methods have the downside that the features are learned on modified images, which potentially harms their generalization to unmodified ones. For example, colorization uses a gray scale image as input, thus the network cannot learn to extract color information, which can be important for other tasks.\nSlightly less related are methods that use additional information to learn features. Here, often temporal information is used in the form of videos.
Typical pretext tasks are based on temporal context (Misra et al., 2016; Wei et al., 2018; Lee et al., 2017; Sermanet et al., 2018), spatio-temporal cues (Isola et al., 2015; Gao et al., 2016; Wang et al., 2017), foreground-background segmentation via video segmentation (Pathak et al., 2017), optical flow (Gan et al., 2018; Mahendran et al., 2018), future-frame synthesis (Srivastava et al., 2015), audio prediction from video (de Sa, 1994; Owens et al., 2016), audio-video alignment (Arandjelović & Zisserman, 2017), ego-motion estimation (Jayaraman & Grauman, 2015), slow feature analysis with higher order temporal coherence (Jayaraman & Grauman, 2016), transformation between frames (Agrawal et al., 2015) and patch tracking in videos (Wang & Gupta, 2015). Since we are interested in learning features from as little data as one image, we cannot make use of methods that rely on video input.\n1Example applications that only rely on low-level feature extractors include template matching (Kat et al., 2018; Talmi et al., 2017) and style transfer (Gatys et al., 2016; Johnson et al., 2016), which currently rely on pre-training with millions of images.\nOur contribution inspects three unsupervised feature learning methods that use very different means of extracting information from the data: BiGAN (Donahue et al., 2017) utilizes a generative adversarial task, RotNet (Gidaris et al., 2018) exploits the photographic bias in the dataset and DeepCluster (Caron et al., 2018) learns stable feature representations under a number of image transformations via proxy labels obtained from clustering. These are described in more detail in the Methods section.\nLearning from a single sample: In some applications of computer vision, the bold idea of learning from a single sample comes out of necessity. For general object tracking, methods such as max margin correlation filters (Rodriguez et al., 2013) learn robust tracking templates from a single sample of the patch. A single image can also be used to learn and interpolate multi-scale textures with a GAN framework (Rott Shaham et al., 2019). Single sample learning was pursued by the semi-parametric exemplar SVM model (Malisiewicz et al., 2011). They learn one SVM per positive sample, separating it from all negative patches mined from the background. While only one sample is used for the positive set, the negative set consists of thousands of images and is a necessary component of their method. The negative space was approximated by a multi-dimensional Gaussian by the Exemplar LDA (Hariharan et al., 2012). These SVMs, one per positive sample, are pooled together using a max aggregation. We differ from both of these approaches in that we do not use a large collection of negative images to train our model. Instead we restrict ourselves to a single or a few images with a systematic augmentation strategy.\nClassical learned and hand-crafted low-level feature extractors: Learning and hand-crafting features pre-dates modern deep learning approaches and self-supervision techniques. For example, the classical work of Olshausen & Field (1997) shows that edge-like filters can be learned via sparse coding of just 10 natural scene images. SIFT (Lowe, 2004) and HOG (Dalal & Triggs, 2005) have been used extensively before the advent of convolutional neural networks and, in many ways, they resemble the first layers of these networks. The scattering transform of Bruna & Mallat (2013); Oyallon et al.
(2017) is a handcrafted design that aims at replacing at least the first few layers of a deep network. While these results show that effective low-level features can be handcrafted, this is insufficient to clarify the power and limitations of self-supervision in deep networks. For instance, it is not obvious whether deep networks can learn better low-level features than these, how many images may be required to learn them, and how effective self-supervision may be in doing so. For instance, as we also show in the experiments, replacing the low-level layers in a convolutional network with handcrafted features such as Oyallon et al. (2017) may still decrease the overall performance of the model. Furthermore, this says little about deeper layers, which we also investigate.\nIn this work we show that current deep learning methods learn slightly better low-level representations than hand-crafted features such as the scattering transform. Additionally, these representations can be learned from one single image with augmentations and without supervision. The results show how current self-supervised learning approaches that use one million images yield only relatively small gains when compared to what can be achieved from one image and augmentations, and they motivate a renewed focus on augmentations and on incorporating prior knowledge into feature extractors." }, { "heading": "3 METHODS", "text": "We discuss first our data and data augmentation strategy (section 3.1) and then we summarize the three different methods for unsupervised feature learning used in the experiments (section 3.2)." }, { "heading": "3.1 DATA", "text": "Our goal is to understand the performance of representation learning methods as a function of the image data used to train them. To make comparisons as fair as possible, we develop a protocol where only the nature of the training data is changed, but all other parameters remain fixed.\nIn order to do so, given a baseline method trained on d source images, we replace those with another set of d images. Of these, now only N ≤ d are source images (i.e. i.i.d. samples), while the remaining d−N are augmentations of the source ones. Thus, the amount of information in the training data is controlled by N, and we can generate a continuum of datasets that vary from one extreme, utilizing a single source image (N=1), to the other extreme, using all N=d original training set images. For example, if the baseline method is trained on ImageNet, then d=1,281,167. When N=1, it means that we train the method using a single source image and generate the remaining 1,281,166 images via augmentation. Other baselines use CIFAR-10/100 images, so in those cases d=50,000 instead.\nThe data augmentation protocol is an extreme version of augmentations already employed by most deep learning protocols. Each method we test, in fact, already performs some data augmentation internally. Thus, when the method is applied on our augmented data, this can be equivalently thought of as incrementing these "native" augmentations by concatenating them with our own.\nChoice of augmentations. Next, we describe how the N source images are expanded to an additional d−N images so that the models can be trained on exactly d images, independent from the choice of N. The idea is to use an aggressive form of data augmentation involving cropping, scaling, rotation, contrast changes, and adding noise. These transformations are representative of invariances that one may wish to incorporate in the features.
Augmentation can be seen as imposing a prior on how we expect the manifold of natural images to look. When training with very few images, these priors become more important since the model cannot extract them directly from data.\nGiven a source image of size H×W, we first extract a certain number of random patches of size (w, h), where w ≤ W and h ≤ H satisfy the additional constraints β ≤ wh/(WH) and γ ≤ h/w ≤ γ^(-1). Thus, the area of the crops is limited to be at least β·WH and at most the whole image. Additionally, changes to the aspect ratio are limited by γ. In practice we use β = 10^(-3) and γ = 3/4.\nSecond, good features should not change much under small image rotations, so images are rotated (before cropping to avoid border artifacts) by α ∈ (−35, 35) degrees. Due to symmetry in image statistics, images are also flipped left-to-right with 50% probability.\nIllumination changes are common in natural images, and we thus expect image features to be robust to color and contrast changes. Thus, we employ a set of linear transformations in RGB space to model this variability in real data. Additionally, the color/intensity of single pixels should not affect the feature representation, as this does not change the contents of the image. To this end, color jitter with additive brightness, contrast and saturation is sampled from three uniform distributions in (0.6, 1.4), and hue noise from (−0.1, 0.1) is applied to the image patches. Finally, the cropped and transformed patches are scaled to the color range (−1, 1) and then rescaled to the full S × S resolution to be supplied to each representation learning method, using bilinear interpolation. This formulation ensures that the patches are created in the target resolution S, independent from the size and aspect ratio W, H of the source image.
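A minimal sketch of this augmentation pipeline, written with torchvision transforms; the paper does not specify its implementation, so the tooling and exact composition below are our own assumptions, with parameter values taken from the text above.

import torchvision.transforms as T

S = 256  # target resolution (method dependent; 128 for BiGAN, 32 for CIFAR experiments)
augment = T.Compose([
    T.RandomRotation(degrees=35),                   # alpha in (-35, 35), applied before cropping
    T.RandomResizedCrop(size=S,
                        scale=(1e-3, 1.0),          # beta <= wh/(WH) <= 1
                        ratio=(3/4, 4/3)),          # gamma <= aspect ratio <= 1/gamma; bilinear by default
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.4, contrast=0.4,     # factors sampled uniformly in (0.6, 1.4)
                  saturation=0.4, hue=0.1),         # hue noise in (-0.1, 0.1)
    T.ToTensor(),
    T.Normalize(mean=[0.5] * 3, std=[0.5] * 3),     # scale to the color range (-1, 1)
])
# patch = augment(source_image)  # source_image is a PIL image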
Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) learn to generate images using an adversarial objective: a generator network maps noise samples to image samples, approximating a target image distribution and a discriminator network is tasked with distinguishing generated and real samples. Generator and discriminator are pitched one against the other and learned together; when an equilibrium is reached, the generator produces images indistinguishable (at least from the viewpoint of the discriminator) from real ones.\nBidirectional Generative Adversarial Networks (BiGAN) (Donahue et al., 2017; Dumoulin et al., 2016) are an extension of GANs designed to learn a useful image representation as an approximate inverse of the generator through joint inference on an encoding and the image. This method’s native augmentation uses random crops and random horizontal flips to learn features from S = 128 sized images. As opposed to the other two methods discussed below it employs leaky ReLU nonlinearities as is typical in GAN discriminators.\nRotation. Most image datasets contain pictures that are ‘upright’ as this is how humans prefer to take and look at them. This photographer bias can be understood as a form of implicit data labelling. RotNet (Gidaris et al., 2018) exploits this by tasking a network with predicting the upright direction of a picture after applying to it a random rotation multiple of 90 degrees (in practice this is formulated as a 4-way classification problem). The authors reason that the concept of ‘upright’ requires learning high level concepts in the image and hence this method is not vulnerable to exploiting low-level visual information, encouraging the network to learn more abstract features. In our experiments, we test this hypothesis by learning from impoverished datasets that may lack the photographer bias. The native augmentations that RotNet uses on the S=256 inputs only comprise horizontal flips and non-scaled random crops to 224× 224.\nClustering. DeepCluster (Caron et al., 2018) is a recent state-of-the-art unsupervised representation learning method. This approach alternates k-means clustering to produce pseudo-labels for the data and feature learning to fit the representation to these labels. The authors attribute the success of the method to the prior knowledge ingrained in the structure of the convolutional neural network (Ulyanov et al., 2018).\nThe method alternatives between a clustering step, in which k-means is applied on the PCA-reduced features with k = 104, and a learning step, in which the network is trained to predict the cluster ID for each image under a set of augmentations (random resized crops with β = 0.08, γ = 34 and horizontal flips) that constitute its native augmentations used on top of the S=256 input images." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate the representation learning methods on ImageNet and CIFAR-10/100 using linear probes (Section 4.1). After ablating various choices of transformations in our augmentation protocol (Section 4.2), we move to the core question of the paper: whether a large dataset is beneficial to unsupervised learning, especially for learning early convolutional features (Section 4.3)." }, { "heading": "4.1 LINEAR PROBES AND BASELINE ARCHITECTURE", "text": "In order to quantify if a neural network has learned useful feature representations, we follow the standard approach of using linear probes (Zhang et al., 2017). 
" }, { "heading": "4 EXPERIMENTS", "text": "We evaluate the representation learning methods on ImageNet and CIFAR-10/100 using linear probes (Section 4.1). After ablating various choices of transformations in our augmentation protocol (Section 4.2), we move to the core question of the paper: whether a large dataset is beneficial to unsupervised learning, especially for learning early convolutional features (Section 4.3)." }, { "heading": "4.1 LINEAR PROBES AND BASELINE ARCHITECTURE", "text": "In order to quantify whether a neural network has learned useful feature representations, we follow the standard approach of using linear probes (Zhang et al., 2017). This amounts to solving a difficult task, such as ImageNet classification, by training a linear classifier on top of pre-trained feature representations, which are kept fixed. Linear classifiers heavily rely on the quality of the representation since their discriminative power is low.\nWe apply linear probes to all intermediate convolutional layers of the networks and train on the ImageNet LSVRC-12 (Deng et al., 2009) and CIFAR-10/100 (Krizhevsky, 2009) datasets, which are the standard benchmarks for evaluation in self-supervised learning. Our base encoder architecture is AlexNet (Krizhevsky et al., 2012) with BatchNorm, since this is a good representative model and is most often used in other unsupervised learning work for the purpose of benchmarking. This model has five convolutional blocks (each comprising a linear convolution layer followed by ReLU and optionally max pooling). We insert the probes right after the ReLU layer in each block, and denote these entry points conv1 to conv5. Applying the linear probes at each convolutional layer allows studying the quality of the representation learned at different depths of the network.\nTable 1: Effect of combinations of the scale, rotation and jitter augmentations (linear probe accuracy on CIFAR-10, MonoGAN setting of Section 4.2).\nCIFAR-10 conv1 conv2 conv3 conv4\n(a) Fully sup. 66.5 70.1 72.4 75.9\n(b) Random feat. 57.8 55.5 54.2 47.3\n(c) No aug. 57.9 56.2 54.2 47.8\n(d) Jitter 58.9 58.0 57.0 49.8\n(e) Rotation 61.4 58.8 56.1 47.5\n(f) Scale 67.9 69.3 67.9 59.1\n(g) Rot. & jitter 64.9 63.6 61.0 53.4\n(h) Rot. & scale 67.6 69.9 68.0 60.7\n(i) Jitter & scale 68.1 71.3 69.5 62.4\n(j) All 68.1 72.3 70.8 63.5\nDetails. While linear probes are conceptually straightforward, there are several technical details that affect the final accuracy by a few percentage points. Unfortunately, prior work has used several slightly different setups, so comparing results across different publications must be done with caution. To make matters more difficult, not all papers released their evaluation source code. We provide this standardized testing code here2.\n2https://github.com/yukimasano/linear-probes\nIn our implementation, we follow the original proposal (Zhang et al., 2017) in pooling each representation to a vector with 9600, 9216, 9600, 9600, 9216 dimensions for conv1-5 using adaptive max-pooling, and absorb the batch normalization weights into the preceding convolutions. For evaluation on ImageNet we follow RotNet to train the linear probes: images are resized such that the shorter edge has a length of 256 pixels, random crops of 224×224 are computed and flipped horizontally with 50% probability. Learning lasts for 36 epochs and the learning rate schedule starts from 0.01 and is divided by five at epochs 5, 15 and 25. The top-1 accuracy of the linear classifier is then measured on the ImageNet validation subset. This uses DeepCluster's protocol, extracting 10 crops for each validation image (four at the corners and one at the center, along with their horizontal flips) and averaging the prediction scores before the accuracy is computed. For CIFAR-10/100 data, we follow the same learning rate schedule, and for both training and evaluation we do not reduce the dimensionality of the representations and keep the images' original size of 32×32.
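The probing protocol itself reduces to training a linear classifier on pooled, frozen features. A minimal PyTorch sketch follows (illustrative only; the pooled dimensionalities follow the description above, while the class layout is our assumption):

```python
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    """Linear probe on a frozen feature extractor (a sketch, not the
    released evaluation code). `features` returns a conv feature map of
    shape (B, C, H, W); only the linear classifier is trained."""
    def __init__(self, features, channels, pool_size, num_classes=1000):
        super().__init__()
        self.features = features.eval()
        for p in self.features.parameters():
            p.requires_grad = False                  # representation stays fixed
        self.pool = nn.AdaptiveMaxPool2d(pool_size)  # e.g. 10 for conv1: 96*10*10 = 9600
        self.fc = nn.Linear(channels * pool_size ** 2, num_classes)

    def forward(self, x):
        with torch.no_grad():
            f = self.features(x)
        return self.fc(self.pool(f).flatten(1))
```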
" }, { "heading": "4.2 EFFECT OF AUGMENTATIONS", "text": "In order to better understand which image transformations are important for learning a good feature representation, we analyze the impact of the augmentation settings. For speed, these experiments are conducted using the CIFAR-10 images (d = 50,000 in the training set), with the smaller source Image B and a GAN using the Wasserstein GAN formulation with gradient penalty (Gulrajani et al., 2017). The encoder is a smaller AlexNet-like CNN consisting of four convolutional layers (kernel sizes: 7, 5, 3, 3; strides: 3, 2, 2, 1) followed by a single fully connected layer as the discriminator. Given that the GAN is trained on a single image (with augmentations), we call this setting MonoGAN.\nTable 1 reports all 2^3 = 8 combinations of the three main augmentations (scale, rotation, and jitter) and a randomly initialized network baseline (see Table 1 (b)), using the linear probes protocol discussed above. Without data augmentation the model only achieves marginally better performance than the random network (which itself achieves a non-negligible level of performance (Ulyanov et al., 2017; Caron et al., 2018)). This is understandable, since the dataset literally consists of a single training image cloned d times. Color jitter and rotation slightly improve the performance of all probes by 1–2% points, but random rescaling adds at least ten points at every depth (see Table 1 (f,h,i)) and is the most important single augmentation. A similar conclusion can be drawn when two augmentations are combined, although there are diminishing returns as more augmentations are added. Overall, we find all three types of augmentations to be of importance when training in the ultra-low data setting.
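For reference, the encoder just described can be written down directly. In the sketch below the kernel sizes and strides follow the text, while the channel widths, paddings and the use of leaky ReLU are our assumptions:

```python
import torch.nn as nn

def make_encoder(widths=(64, 128, 256, 512)):
    """The small AlexNet-like CIFAR-10 encoder described above (kernel
    sizes 7, 5, 3, 3 and strides 3, 2, 2, 1); widths/paddings assumed."""
    layers, in_ch = [], 3
    for out_ch, k, s in zip(widths, (7, 5, 3, 3), (3, 2, 2, 1)):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=k, stride=s, padding=k // 2),
                   nn.LeakyReLU(0.2, inplace=True)]
        in_ch = out_ch
    return nn.Sequential(*layers)

# With these paddings a 32x32 input yields a 512 x 3 x 3 map, so the
# WGAN-GP critic is this encoder plus one fully connected layer:
# critic = nn.Sequential(make_encoder(), nn.Flatten(), nn.Linear(512 * 3 * 3, 1))
```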
" }, { "heading": "4.3 BENCHMARK EVALUATION", "text": "We analyze how performance varies as a function of N, the number of actual samples used to generate the augmented datasets, and compare it to the gold-standard setup (in terms of choice of training data) defined in the papers that introduced each method. The evaluation is again based on linear probes (Section 4.1).\nMono is enough. From Table 2 we make the following observations. Training with just a single source image (f,g,k,l,p,q) is much better than random initialization (c) for all layers. Notably, these models also outperform Gabor-like filters from Scattering networks (Bruna & Mallat, 2013), which are hand-crafted image features, replacing the first two convolutional layers as in (Oyallon et al., 2017). Using the same protocol as in that paper, this only achieves an accuracy of 18.9%, compared to over 30% at conv2 for our variant (p).\nMore importantly, when comparing within each pretext task, even with one image we are able to improve the quality of conv1–conv3 features compared to full (unsupervised) ImageNet training for GAN based self-supervision (e-i). For the other methods (j-n, o-s) we reach, and for the first layer surpass, the reported performance, and are within 1.5% points for the second. Given that the best unsupervised performance for conv2 is 32.5, our method using a single source Image A (Table 2, p) is remarkably close with 31.5.\nImage contents. While we surpass the GAN based approach of (Donahue et al., 2017) for both single source images, we find more nuanced results for the other two methods. For RotNet, as expected, the photographic bias cannot be extracted from a single image. Thus its performance is low with little training data and increases together with the number of images (Table 2, j-n). When comparing Image A and Image B trained networks for RotNet, we find that the photograph yields better performance than the hand-drawn animal image. This indicates that the method can extract rotation information from low level image features such as patches, which is at first counterintuitive. Considering that the hand-drawn image does not work well, we can assume that lighting and shadows, even in small patches, can indeed give important cues on the up direction, which can be learned even from a single (real) image. DeepCluster shows poor performance in conv1, which we can improve upon in the single image setting (Table 2, o-r). Naturally, the image content matters: a trivial image without any image gradient (e.g. a picture of a white wall) would not provide enough signal for any method. To better understand this issue, we also train DeepCluster on the much less cluttered Image C to analyze how much the image influences our claims. We find that even though this image contains large parts of sky and sea, the performance is only slightly lower than that of Image A. This finding indicates that the augmentations can compensate even for large untextured areas and that the exact choice of image is not critical.\nMore than one image. While BiGAN fails to converge for N ∈ {10, 1000}, most likely due to issues in learning from a distribution which is neither whole images nor only patches, we find that both RotNet and DeepCluster improve their performance in deeper layers when increasing the number of training images. However, for conv1 and conv2, a single image is enough. In deeper layers, DeepCluster seems to require large amounts of source images to yield the reported results, as the deka- and kilo- variants start improving over the single image case (Table 2, o-t). This need for data also explains the gap between the two input images, which have different resolutions. Summarizing Table 2, we can conclude that learning conv1, conv2 and, for the most part, conv3 (33.4 vs. 39.4) on over 1M images does not yield a significant performance increase over using one single training image, a highly unexpected result.\nGeneralization. In Table 3, we show the results of training linear classifiers for the CIFAR-10 dataset and compare against various baselines. We find that the GAN trained on the smaller Image B outperforms all other methods, including the fully-supervised trained one, for the first convolutional layer. We also outperform the same architecture trained on the full CIFAR-10 training set using RotNet, which might be due to the fact that either CIFAR images do not contain much information about the orientation of the picture or they do not contain as many objects as ImageNet images. While the GAN trained on the whole dataset outperforms the MonoGAN on the deeper layers, the gap stays very small until the last layer. These findings are also reflected in the experiments on the CIFAR-100 dataset shown in Table 3. We find that our method obtains the best performance for the first two layers, even against the fully supervised version. The gap between our mono variant and the other methods increases again with deeper layers, hinting at the fact that we cannot learn very high level concepts in deeper layers from just one single image. These results corroborate the finding that our method allows learning very generalizable early features that are not domain dependent." }, { "heading": "4.4 QUALITATIVE ANALYSIS", "text": "Visual comparison of weights. In Figure 2, we compare the learned filters of all first-layer convolutions of an AlexNet trained with the different methods and a single image. 
First, we find that the filters closely resemble those obtained via supervised training: Gabor-like edge detectors and various color blobs. Second, we find that the look of the filters is not easily predictive of their performance; e.g., while the generatively learned filters (BiGAN) show many edge detectors, their linear probes performance is about the same as that of DeepCluster, which seems to learn many somewhat redundant point features. However, we also find that some edge detectors are required, as we can confirm from RotNet and DeepCluster trained on Image B, which yield less crisp filters and worse performances.\nFine-tuning instead of freezing. In Tab. 4, we show the results of retraining a network with the first two convolutional filters, or the scattering transform from (Oyallon et al., 2017), left frozen. We observe that our single-image trained DeepCluster and BiGAN models achieve performance close to the supervised benchmark. Notably, the scattering transform as a replacement for conv1-2 performs slightly worse than the analyzed single-image methods. We also show in the appendix the results of retraining a network initialized with the first two convolutional layers obtained from a single image and subsequently linearly probing the model. The results are shown in Appendix Tab. 5, and we find that we can recover the performance of fully-supervised networks, i.e. the first two convolutional filters trained from just a single image generalize well and do not get stuck in an image-specific minimum.
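The frozen-layer experiments of Tab. 4 correspond to a simple recipe: keep the parameters (and BN statistics) of the first two convolutional blocks fixed and train everything else. A minimal sketch, assuming an AlexNet-style model whose blocks live in an nn.Sequential called model.features:

```python
import torch.nn as nn

def freeze_first_two_convs(model):
    """Freeze the first two convolutional blocks before retraining the rest
    (a sketch of the Tab. 4 setup; the model.features layout is assumed)."""
    conv_seen = 0
    for module in model.features:
        if isinstance(module, nn.Conv2d):
            conv_seen += 1
            if conv_seen > 2:            # stop before the third conv block
                break
        for p in module.parameters():
            p.requires_grad = False
        if isinstance(module, nn.BatchNorm2d):
            module.eval()                # also fix BN statistics (re-apply after model.train())
    # Pass only the trainable parameters to the optimizer, e.g.
    # torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.01)
    return model
```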
Neural style transfer. Lastly, we show how our features trained on only a single image can be used for other applications. In Figure 3 we show two basic style transfers using the method of (Gatys et al., 2016) from an official PyTorch tutorial3. Image content and style are separated, and the style is transferred from the source to the target image using all CNN features, not just the shallow layers. We visually compare the results of using our features and of using features from full ImageNet supervision. We find almost no visual differences in the stylized images and can conclude that our early features are equally powerful as fully supervised ones for this task." }, { "heading": "5 CONCLUSIONS", "text": "We have made the surprising observation that we can learn good and generalizable features through self-supervision from one single source image, provided that sufficient data augmentation is used. Our results complement recent works (Mahajan et al., 2018; Goyal et al., 2019) that have investigated self-supervision in the very large data regime. Our main conclusion is that these methods succeed perfectly in capturing the simplest image statistics, but that for deeper layers a gap exists with strong supervision, which is compensated only in a limited manner by using large datasets. This novel finding motivates a renewed focus on the role of augmentations in self-supervised learning and a critical rethinking of how to better leverage the available data.\nACKNOWLEDGEMENTS.\nWe thank Aravindh Mahendran for fruitful discussions. Yuki Asano gratefully acknowledges support from the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines & Systems (EP/L015897/1). The work is supported by ERC IDIU-638009.\n3https://pytorch.org/tutorials/advanced/neural_style_tutorial.html" }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 IMAGENET TRAINING IMAGES", "text": "The images used for the N=10 experiments are shown in fig. 4.\nA.2 VISUAL COMPARISON OF FILTERS\nIn order to understand what deeper neurons are responding to in our model, we visualize random neurons via activation maximization (Erhan et al., 2009; Zeiler & Fergus, 2014) in each layer. Additionally, we retrieve the top-9 images in the ImageNet training set that activate each neuron most in Figure 5. Since the mono networks are not trained on the ImageNet dataset, it can be used here for visualization. From the first convolutional layer we find typical neurons strongly reacting to oriented edges. In layers 2-4 we find patterns such as grids (conv2:3), and textures such as leopard skin (conv2:2) and a round grid cover (conv4:4). Confirming our hypothesis that the neural network is only extracting patterns and not semantic information, we do not find any neurons particularly specialized to certain objects, even in higher levels, such as the dog faces that can be found in supervised networks. This finding aligns with the observations of other unsupervised methods (Caron et al., 2018; Zhang et al., 2017). As most neurons extract simple patterns and textures, the surprising effectiveness of training a network using a single image can be explained by the recent finding that even CNNs trained on ImageNet rely on texture (as opposed to shape) information to classify (Geirhos et al., 2019).
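Activation maximization itself is a short optimization loop: starting from noise, the input is updated by gradient ascent on a chosen unit's response. A bare-bones sketch (regularizers such as jitter and blurring that are typically added for cleaner visualizations are omitted):

```python
import torch

def activation_maximization(model, layer, unit, steps=200, lr=0.1, size=224):
    """Gradient ascent on the input to maximize one unit's mean activation
    (a minimal sketch of the visualization used in Figure 5)."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)               # only the image is optimized
    img = torch.randn(1, 3, size, size, requires_grad=True)
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(img)
        loss = -acts["out"][0, unit].mean()   # negative activation -> ascent
        loss.backward()
        opt.step()
    handle.remove()
    return img.detach()
```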
A.3 RETRAINING FROM SINGLE IMAGE INITIALIZATION\nIn Table 5, we initialize AlexNet models using the first two convolutional filters learned from a single image and retrain them on ImageNet. We find that the networks fully recover their performance: the first filters do not leave the network stuck in a bad local minimum, despite having been trained on a single image from a different distribution. The difference between the BiGAN and the fully supervised model is likely due to the former using a smaller input resolution (112 instead of 224), as the BiGAN's output resolution is limited.\nA.4 LINEAR PROBES ON IMAGENET\nWe show two plots of the ImageNet linear probes results (Table 2 of the paper) in fig. 6. On the left we plot performance per layer on an absolute scale. Naturally, the performance of the supervised model improves with depth, while all unsupervised models degrade after conv3. From the relative plot on the right, it becomes clear that with our training scheme we can even slightly surpass supervised performance on conv1, presumably because our model is trained with sometimes very small patches and thus receives an emphasis on learning good low-level filters. The gap between all self-supervised methods and the supervised baseline increases with depth, due to the fact that the supervised model is trained for this specific task, whereas the self-supervised models learn from a surrogate task without labels.\nA.5 EXAMPLE AUGMENTED TRAINING DATA\nIn figs. 7 to 10 we show example patches generated by our augmentation strategy for the datasets with different N. Even though the images and patches are very different in color and shape distribution, our model learns weights that perform similarly in the linear probes benchmark (see Table 2 in the paper)." } ]
2020
A CRITICAL ANALYSIS OF SELF-SUPERVISION, OR WHAT WE CAN LEARN FROM A SINGLE IMAGE
SP:5abcf6f6bd3c0079e6f942f614949a3f566afed8
[ "In this paper, the authors propose a method to perform architecture search on the number of channels in convolutional layers. The proposed method, called AutoSlim, is a one-shot approach based on previous work of Slimmable Networks [2,3]. The authors have tested the proposed methods on a variety of architectures on ImageNet dataset. ", "This paper proposes a simple and one-shot approach on neural architecture search for the number of channels to achieve better accuracy. Rather than training a lot of network samples, the proposed method trains a single slimmable network to approximate the network accuracy of different channel configurations. The experimental results show that the proposed method achieves better performance than the existing baseline methods." ]
We study how to set the number of channels in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size). A simple and one-shot approach, named AutoSlim, is presented. Instead of training many network samples and searching with reinforcement learning, we train a single slimmable network to approximate the network accuracy of different channel configurations. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop. By this single pass, we can obtain the optimized channel configurations under different resource constraints. We present experiments with MobileNet v1, MobileNet v2, ResNet-50 and RL-searched MNasNet on ImageNet classification. We show significant improvements over their default channel configurations. We also achieve better accuracy than recent channel pruning methods and neural architecture search methods with 100× lower search cost. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs).
[]
[ { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Smash: one-shot model architecture search through hypernetworks", "venue": "arXiv preprint arXiv:1708.05344,", "year": 2017 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "arXiv preprint arXiv:1808.05377,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Training pruned neural networks", "venue": "CoRR, abs/1803.03635,", "year": 2018 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Dongyoon Han", "Jiwhan Kim", "Junmo Kim" ], "title": "Deep pyramidal residual networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yihui He", "Xiangyu Zhang", "Jian Sun" ], "title": "Channel pruning for accelerating very deep neural networks", "venue": "In Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Yihui He", "Ji Lin", "Zhijian Liu", "Hanrui Wang", "Li-Jia Li", "Song Han" ], "title": "Amc: Automl for model compression and acceleration on mobile devices", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Nicolas Heess", "Srinivasan Sriram", "Jay Lemmon", "Josh Merel", "Greg Wayne", "Yuval Tassa", "Tom Erez", "Ziyu Wang", "SM Eslami", "Martin Riedmiller" ], "title": "Emergence of locomotion behaviours in rich environments", "venue": "arXiv preprint arXiv:1707.02286,", "year": 2017 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", 
"year": 2017 }, { "authors": [ "Gao Huang", "Danlu Chen", "Tianhong Li", "Felix Wu", "Laurens van der Maaten", "Kilian Q Weinberger" ], "title": "Multi-scale dense networks for resource efficient image classification", "venue": "arXiv preprint arXiv:1703.09844,", "year": 2017 }, { "authors": [ "Qiangui Huang", "Kevin Zhou", "Suya You", "Ulrich Neumann" ], "title": "Learning to prune filters in convolutional neural networks", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2018 }, { "authors": [ "Zehao Huang", "Naiyan Wang" ], "title": "Data-driven sparse structure selection for deep neural networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip HS Torr" ], "title": "Snip: Single-shot network pruning based on connection sensitivity", "venue": "arXiv preprint arXiv:1810.02340,", "year": 2018 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "arXiv preprint arXiv:1608.08710,", "year": 2016 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Oriol Vinyals", "Chrisantha Fernando", "Koray Kavukcuoglu" ], "title": "Hierarchical representations for efficient architecture search", "venue": "arXiv preprint arXiv:1711.00436,", "year": 2017 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Zhuang Liu", "Jianguo Li", "Zhiqiang Shen", "Gao Huang", "Shoumeng Yan", "Changshui Zhang" ], "title": "Learning efficient convolutional networks through network slimming", "venue": "In Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the value of network pruning", "venue": "arXiv preprint arXiv:1810.05270,", "year": 2018 }, { "authors": [ "Jian-Hao Luo", "Jianxin Wu", "Weiyao Lin" ], "title": "Thinet: A filter level pruning 
method for deep neural network compression", "venue": "arXiv preprint arXiv:1707.06342,", "year": 2017 }, { "authors": [ "Renqian Luo", "Fei Tian", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "NIPS-W,", "year": 2017 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "arXiv preprint arXiv:1802.03268,", "year": 2018 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation", "venue": "arXiv preprint arXiv:1801.04381,", "year": 2018 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke", "Alexander A Alemi" ], "title": "Inception-v4, inception-resnet and the impact of residual connections on learning", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Quoc V Le" ], "title": "Mnasnet: Platformaware neural architecture search for mobile", "venue": "arXiv preprint arXiv:1807.11626,", "year": 2018 }, { "authors": [ "Xin Wang", "Fisher Yu", "Zi-Yi Dou", "Joseph E Gonzalez" ], "title": "Skipnet: Learning dynamic routing in convolutional networks", "venue": "arXiv preprint arXiv:1711.09485,", "year": 2017 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Tien-Ju Yang", "Andrew Howard", "Bo Chen", "Xiao Zhang", "Alec Go", "Mark Sandler", "Vivienne Sze", "Hartwig Adam" ], "title": "Netadapt: Platform-aware neural network adaptation for mobile applications", 
"venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jianbo Ye", "Xin Lu", "Zhe Lin", "James Z Wang" ], "title": "Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers", "venue": "arXiv preprint arXiv:1802.00124,", "year": 2018 }, { "authors": [ "Jiahui Yu", "Thomas Huang" ], "title": "Universally slimmable networks and improved training techniques", "venue": "arXiv preprint arXiv:1903.05134,", "year": 2019 }, { "authors": [ "Chris Zhang", "Mengye Ren", "Raquel Urtasun" ], "title": "Graph hypernetworks for neural architecture search", "venue": "arXiv preprint arXiv:1810.05749,", "year": 2018 }, { "authors": [ "Ke Zhang", "Liru Guo", "Ce Gao", "Zhenbing Zhao" ], "title": "Pyramidal ror for image classification", "venue": "Cluster Computing,", "year": 2017 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": "arXiv preprint arXiv:1707.01083,", "year": 2017 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The channel configuration (a.k.a.. filter numbers or channel numbers) of a neural network plays a critical role in its affordability on resource constrained platforms, such as mobile phones, wearables and Internet of Things (IoT) devices. The most common constraints (Liu et al., 2017b; Huang et al., 2017; Wang et al., 2017; Han et al., 2015a), i.e., latency, FLOPs and runtime memory footprint, are all bound to the number of channels. For example, in a single convolution or fully-connected layer, the FLOPs (number of Multiply-Adds) increases linearly by the output channels. The memory footprint can also be reduced (Sandler et al., 2018) by reducing the number of channels in bottleneck convolutions for most vision applications (Sandler et al., 2018; Howard et al., 2017; Ma et al., 2018; Zhang et al., 2017b).\nDespite its importance, the number of channels has been chosen mostly based on heuristics. LeNet5 (LeCun et al., 1998) selected 6 channels in its first convolution layer, which is then projected to 16 channels after sub-sampling. AlexNet (Krizhevsky et al., 2012) adopted five convolutions with channels equal to 96, 256, 384, 384 and 256. A commonly used heuristic, the “half size, double channel” rule, was introduced in VGG nets (Simonyan & Zisserman, 2014), if not earlier. The rule is that when spatial size of feature map is halved, the number of filters is doubled. This heuristic has been more-or-less used in followup network architecture designs including ResNets (He et al., 2016; Xie et al., 2017), Inception nets (Szegedy et al., 2015; 2016; 2017), MobileNets (Sandler et al., 2018; Howard et al., 2017) and networks for many vision applications. Other heuristics have also been explored. For example, the pyramidal rule (Han et al., 2017; Zhang et al., 2017a) suggested to gradually increase the channels in all convolutions layer by layer, regardless of spatial size. Figure 1 visually summarizes these heuristics for setting channel numbers in a neural network.\nBeyond the macro-level heuristics across entire network, recent works (Sandler et al., 2018; He et al., 2016; Zhang et al., 2017a; Tan et al., 2018; Cai et al., 2018) have also digged into channel configuration for micro-level building blocks (a network building block is usually composed of\nseveral 1×1 and 3×3 convolutions). These micro-level heuristics have led to better speed-accuracy trade-offs. The first of its kind, bottleneck residual block, was introduced in ResNet (He et al., 2016). It is composed of 1× 1, 3× 3, and 1× 1 convolutions, where the 1× 1 layers are responsible for reducing and then restoring dimensions, leaving the 3 × 3 layer a bottleneck (4× reduction). MobileNet v2 (Sandler et al., 2018), however, argued that the bottleneck design is not efficient and proposed the inverted residual block where 1 × 1 layers are used for expanding feature first (6× expansion) and then projecting back after intermediate 3 × 3 depthwise convolution. Furthermore, MNasNet (Tan et al., 2018) and ProxylessNAS nets (Cai et al., 2018) included 3× expansion version of inverted residual block into search space, and achieved even better accuracy under similar runtime latency.\nApart from these human-designed heuristics, efforts on automatically optimizing channel configuration have been made explicitly or implicitly. 
A recent work (Liu et al., 2018c) suggested that many network pruning methods (Liu et al., 2017b; Li et al., 2016; Luo et al., 2017; He et al., 2017; Huang & Wang, 2018; Han et al., 2015b) can be thought of as performing network architecture search for channel numbers. Liu et al. (Liu et al., 2018c) showed that training these pruned architectures from scratch leads to similar or even better performance than fine-tuning and pruning from a large model. More recently, MNasNet (Tan et al., 2018) proposed to directly search network architectures, including filter sizes, using reinforcement learning algorithms (Schulman et al., 2017; Heess et al., 2017). Although the search is performed on a factorized hierarchical search space, massive network samples and computational cost (Tan et al., 2018) are required for an optimized network architecture.\nIn this work, we study how to set channel numbers in a neural network to achieve better accuracy under constrained resources. To start, the first and most brute-force approach that comes to mind is exhaustive search: training all possible channel configurations of a deep neural network for full epochs (e.g., MobileNets (Sandler et al., 2018; Howard et al., 2017) are trained for approximately 480 epochs on ImageNet). We could then simply select the best performers that satisfy the efficiency constraints. However, this is undoubtedly impractical since the cost of this brute-force approach is too high. For example, consider an 8-layer convolutional network and a search space limited to 10 candidates of channel numbers (e.g., 32, 64, ..., 320) for each layer. As a result, there are in total 10^8 candidate network architectures.\nTo address this challenge, we present a simple and one-shot solution, AutoSlim. Our main idea lies in training a slimmable network (Yu et al., 2018) to approximate the network accuracy of different channel configurations. Yu et al. (Yu et al., 2018; Yu & Huang, 2019) introduced slimmable networks that can run at arbitrary width with equal or even better performance than the same architecture trained individually. Although the original motivation is to provide instant and adaptive accuracy-efficiency trade-offs, we find slimmable networks especially suitable as benchmark performance estimators for several reasons: (1) Training slimmable models (using the sandwich rule (Yu & Huang, 2019)) is much faster than the brute-force approach. (2) A trained slimmable model can execute at arbitrary width, which can be used to approximate the relative performance among different channel configurations. (3) The same trained slimmable model can be applied to the search of optimal channels for different resource constraints.\nIn AutoSlim, we first train a slimmable model for a few epochs (e.g., 10% to 20% of the full training epochs) to quickly obtain a benchmark performance estimator. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop on the validation set (for ImageNet, we randomly hold out 50K samples of the training set as the validation set). After this single pass, we can obtain the optimized channel configurations under different resource constraints (e.g., network FLOPs limited to 150M, 300M and 600M).
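All of these constraints are deterministic functions of the channel configuration. As a small illustration (ours, not the paper's code), the multiply-add count of a convolution is linear in both its input and output channel counts, so a candidate configuration can be checked against a FLOPs budget in a few lines:

```python
def conv_flops(c_in, c_out, k, h_out, w_out, groups=1):
    """Multiply-adds of one convolution: linear in both c_in and c_out."""
    return c_in * c_out * k * k * h_out * w_out // groups

def network_flops(channels, spatial, k=3):
    """Total FLOPs of a plain convolutional stack, given per-layer output
    channel counts `channels` and output spatial sizes `spatial` (a sketch;
    real networks also need terms for depthwise and 1x1 layers)."""
    total, c_in = 0, 3
    for c_out, s in zip(channels, spatial):
        total += conv_flops(c_in, c_out, k, s, s)
        c_in = c_out
    return total

# e.g. network_flops([32, 64, 128], [112, 56, 28]) < 150e6 checks a
# candidate configuration against the 150M budget above.
```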
Finally, we train these optimized architectures individually or jointly (as a single slimmable network) for full training epochs. We experiment with various networks including MobileNet v1, MobileNet v2, ResNet-50 and the RL-searched MNasNet on the challenging setting of 1000-class ImageNet classification. AutoSlim achieves better results (with much lower search cost) compared with three baselines: (1) the default channel configurations of these networks, (2) channel pruning methods on the same network architectures (Luo et al., 2017; He et al., 2017; Yang et al., 2018) and (3) reinforcement learning based architecture search methods (He et al., 2018; Tan et al., 2018)." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 ARCHITECTURE SEARCH FOR CHANNEL NUMBERS", "text": "In this part, we mainly discuss previous methods on automatic architecture search for channel numbers. Human-designed heuristics have been introduced in Section 1 and visually summarized in Figure 1.\nChannel Pruning. Channel pruning (a.k.a., network slimming) methods (Liu et al., 2017b; He et al., 2017; Ye et al., 2018; Huang et al., 2018; Lee et al., 2018) aim at reducing the effective channels of a large neural network to speed up its inference. Training-based, inference-time and initialization-time pruning methods have all been proposed (Liu et al., 2017b; He et al., 2017; Ye et al., 2018; Huang et al., 2018; Lee et al., 2018; Frankle & Carbin, 2018) in the literature. Here we selectively review two methods (Liu et al., 2017b; He et al., 2017). He et al. (He et al., 2017) proposed an inference-time approach based on an iterative two-step algorithm: LASSO based channel selection and least square feature reconstruction. Liu et al. (Liu et al., 2017b), on the other hand, trained neural networks with an ℓ1 regularization on the scaling factors in batch normalization (BN) (Ioffe & Szegedy, 2015). By pushing the factors towards zero, insignificant channels can be identified and removed. In a recent work (Liu et al., 2018c), Liu et al. suggested that many network pruning methods (Liu et al., 2017b; Li et al., 2016; Luo et al., 2017; He et al., 2017; Huang & Wang, 2018; Han et al., 2015b) can be thought of as performing network architecture search for channel numbers. In experiments, Liu et al. (Liu et al., 2018c) showed that training these pruned architectures from scratch leads to similar or even better performance than iteratively fine-tuning and pruning a large model. Thus, Liu et al. (Liu et al., 2018c) concluded that training a large, over-parameterized model is not necessary to obtain an efficient final model. In our work, we take channel pruning methods (Luo et al., 2017; He et al., 2017; 2018) as one of the baselines.
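For concreteness, the ℓ1-on-BN regularizer at the heart of network slimming amounts to a one-line penalty term; a minimal sketch (the penalty strength is an assumption):

```python
import torch.nn as nn

def bn_l1_penalty(model, strength=1e-4):
    """l1 penalty on the BatchNorm scaling factors (gamma), as in network
    slimming (Liu et al., 2017b); a sketch of the regularizer only."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return strength * penalty

# training loss: cross_entropy(logits, labels) + bn_l1_penalty(model);
# after training, channels whose gamma is pushed near zero are pruned.
```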
Neural Architecture Search (NAS). Recently there has been a growing interest in automating the neural network architecture design (Tan et al., 2018; Cai et al., 2018; Elsken et al., 2018; Bender et al., 2018; Pham et al., 2018; Zoph et al., 2018; Liu et al., 2018a; 2017a; 2018b; Brock et al., 2017). Significant improvements have been achieved by these automatically searched architectures in many vision and language tasks (Zoph et al., 2018; Zoph & Le, 2016). However, most neural architecture search methods (Elsken et al., 2018; Bender et al., 2018; Pham et al., 2018; Zoph et al., 2018; Liu et al., 2018a; 2017a; 2018b; Brock et al., 2017) did not include channel configuration in the search space, and instead applied human-designed heuristics. More recently, RL-based search algorithms have also been applied to prune channels (He et al., 2018) or search for filter numbers (Tan et al., 2018) directly. He et al. proposed AutoML for Model Compression (AMC) (He et al., 2018), which leveraged reinforcement learning (deep deterministic policy gradient (Lillicrap et al., 2015)) to provide the model compression policy. MNasNet (Tan et al., 2018) proposed to directly search network architectures, including filter sizes, for mobile devices. In the search, each sampled model is trained for 5 epochs using an aggressive learning rate schedule, and evaluated on a 50K validation set. In total, Tan et al. sampled about 8,000 models during architecture search. Further, ProxylessNAS (Cai et al., 2018) proposed to directly learn the architectures for large-scale target tasks and target hardware platforms, based on DARTS (Liu et al., 2018b). For each residual block, ProxylessNAS (Cai et al., 2018) followed the channel configuration of MNasNet (Tan et al., 2018), while inside each block, the choices can be the ×3 or ×6 versions of inverted residual blocks. The memory consumption issue (Cai et al., 2018; Liu et al., 2018b) was addressed by binarizing the architecture parameters and forcing only one path to be active." }, { "heading": "2.2 SLIMMABLE NETWORKS", "text": "Slimmable networks were first introduced in (Yu et al., 2018). A general slimmable training algorithm and switchable batch normalization were introduced to train a single neural network executable at different widths, permitting instant and adaptive accuracy-efficiency trade-offs at runtime. However, one drawback of switchable batch normalization is that the width can only be chosen from a predefined set of widths. This drawback was addressed in (Yu & Huang, 2019), where the authors introduced universally slimmable networks, extending slimmable networks to execute at arbitrary width, and generalizing to networks both with and without batch normalization layers. Meanwhile, two improved training techniques, the sandwich rule and inplace distillation, were proposed (Yu & Huang, 2019) to enhance the training process and boost testing accuracy. Moreover, with the proposed methods, one can train nonuniform universally slimmable networks, where the width ratio is not uniformly applied to all layers. In other words, each layer in a nonuniform universally slimmable network can adjust its number of channels independently during inference. In this work, we simply refer to nonuniform universally slimmable networks as slimmable networks, if not explicitly noted. While the original motivation (Yu et al., 2018; Yu & Huang, 2019) of slimmable networks is to provide instant and adaptive accuracy-efficiency trade-offs at runtime for different devices, we present an approach that uses slimmable networks for searching the channel configurations of deep neural networks." }, { "heading": "3 AUTOSLIM: NETWORK SLIMMING BY SLIMMABLE NETWORKS", "text": "In this section, we first present an overview of our proposed approach for searching the channel configuration of neural networks. We then discuss and analyze the differences between our approach and the baselines, i.e., network pruning methods and network architecture search methods. Afterwards we present each individual module of our proposed solution and discuss its non-trivial details." }, { "heading": "3.1 OVERVIEW", "text": "The goal of channel configuration search is to optimize the number of channels in each layer, such that the network architecture with the optimized channel configuration can achieve better accuracy under constrained resources. The constraints can be FLOPs, latency, memory footprint or model size. 
Our approach is conceptually simple, and it has two essential steps:\n(1) Given a network architecture (e.g., MobileNets, ResNets), we first train a slimmable model for a few epochs (e.g., 10% to 20% of the full training epochs). During the training, many different sub-networks with diverse channel configurations have been sampled and trained. Thus, after training one can directly sample its sub-network architectures for instant inference, using the corresponding computational graph and the same trained weights.\n(2) Next, we iteratively evaluate the trained slimmable model on the validation set. In each iteration, we decide which layer to slim by comparing the feed-forward evaluation accuracy on the validation set. We greedily slim the layer with minimal accuracy drop, until reaching the efficiency constraints. No training is required in this step.\nThe flow diagram of our approach is shown in Figure 2. Our approach is also flexible for different resource constraints, since the FLOPs, latency, memory footprint and model size are all deterministic given a channel configuration and a runtime environment. By a single pass of greedy slimming in step (2), we can obtain the (FLOPs, latency, memory footprint, model size, accuracy) tuples of different channel configurations. It is noteworthy that the latency and accuracy are relative values, since the latency may differ across hardware and the accuracy can be improved by training the network for full epochs. In the setting of optimizing channel numbers, we benefit from these relative values as performance estimators.\nFigure 3: (a) The pipeline of network pruning methods (Liu et al., 2017b): train with channel sparsity regularization, prune channels with small scaling factors, then fine-tune the pruned network to obtain an efficient network. (b) The pipeline of network architecture search methods (Tan et al., 2018; He et al., 2018): from the search space, a search agent samples and trains networks, which are evaluated to estimate a reward that trains the agent and updates its policy, yielding an efficient network architecture.\nDiscussion. We compare the flow diagram of our approach with the baselines, i.e., network pruning methods and network architecture search methods.\nMany network channel pruning methods (Liu et al., 2017b; Han et al., 2015a; Luo et al., 2017; Han et al., 2015b) follow a typical iterative training-pruning-finetuning pipeline, as shown in Figure 3a. For example, Liu et al. (Liu et al., 2017b) trained neural networks with an ℓ1 regularization on the scaling factors in batch normalization (BN). After training, the method obtains channels in which many scaling factors are near zero for pruning. Pruning will temporarily lead to accuracy loss, thus a fine-tuning process and a repetitive multi-pass procedure are introduced to enhance final accuracy. Compared with our approach, a notable difference is that most network channel pruning methods are grounded in the importance of trained weights, thus the slimmed layer usually consists of channels with discrete indices (e.g., the 4th, 7th and 9th channels are left as important channels while all others are pruned). In our approach, after slimmable training, the importance of a weight is implicitly ranked by its index. Thus our approach focuses more on the importance of channel numbers, and we always keep the lower-index channels (e.g., the 1st to 3rd channels are all kept while the 4th to 10th channels are slimmed in step (2)). 
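To make step (2) concrete, the following is a minimal sketch of the greedy slimming loop; eval_acc and flops stand for the evaluation of the trained slimmable model and the deterministic cost model described above, not the released implementation:

```python
def greedy_slim(eval_acc, flops, channels, group_sizes, target_flops):
    """Greedy slimming over a trained slimmable model (a sketch).

    eval_acc(channels) -> validation accuracy of the sub-network with the
    given per-layer channel counts; flops(channels) -> its cost. Returns
    the configurations encountered on the way down to the budget."""
    trajectory = []
    while flops(channels) > target_flops:
        best_layer, best_acc = None, -1.0
        for i, g in enumerate(group_sizes):
            if channels[i] - g <= 0:
                continue                       # cannot remove the last group
            candidate = channels[:i] + [channels[i] - g] + channels[i + 1:]
            acc = eval_acc(candidate)
            if acc > best_acc:                 # slim the layer that hurts least
                best_layer, best_acc = i, acc
        if best_layer is None:
            break
        channels[best_layer] -= group_sizes[best_layer]
        trajectory.append((list(channels), flops(channels), best_acc))
    return trajectory
```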
We demonstrate the advantage of our approach with empirical evidence on ImageNet classification with various network architectures.\nNetwork architecture search methods (Tan et al., 2018; Cai et al., 2018; Zoph et al., 2018; Zoph & Le, 2016) commonly consist of three major components: the search space, the search strategy, and the performance estimation strategy. A typical pipeline is shown in Figure 3b. First the search space is defined, based on which the search agent samples network architectures. The architecture is then passed to a performance estimator, which returns rewards (e.g., predictive accuracy after training and/or network runtime latency) to the search agent. In this process, the search agent learns from the repetitive loop to design better network architectures. One major drawback of network architecture search methods is their high computational and time cost (Pham et al., 2018; Liu et al., 2018b). Although differentiable architecture search methods (Liu et al., 2018b; Luo et al., 2018) were recently proposed, they cannot be applied to the search of channel numbers directly. Most of them (Liu et al., 2018b; Luo et al., 2018) were still using human-designed heuristics for setting channel numbers, which may introduce human bias." }, { "heading": "3.2 TRAINING SLIMMABLE NETWORKS", "text": "Warmup. We warm up with a brief review of training techniques for slimmable networks. More details can be found in (Yu et al., 2018; Yu & Huang, 2019). Slimmable networks were first introduced and trained with switchable batch normalization (Ioffe & Szegedy, 2015), which employs individual BNs for different sub-networks. During training, features are normalized with the current mini-batch mean and variance, thus a simple modification to switchable batch normalization is introduced in (Yu & Huang, 2019): re-calibrating BN statistics after training. With this simple modification, one can train universally slimmable networks (Yu & Huang, 2019) that can run with arbitrary channel numbers. Moreover, two improved training techniques, the sandwich rule and inplace distillation, were introduced to enhance the training process and boost testing accuracy. We use all these techniques in training slimmable models by default.\nAssumption. Our approach rests on the assumption that the slimmable model is a good accuracy estimator of individually trained models given the same channel configuration. More specifically, we are interested in the relative ranking of accuracy among networks with different channel configurations. We use the instant inference accuracy of a slimmable model as the performance estimator. We note that assumptions and approximations commonly exist in other related methods. For example, in network channel pruning methods (Liu et al., 2017b; He et al., 2017), one may assume that weights with smaller norm are less informative and can be pruned, which may not be the case, as shown in (Ye et al., 2018). Recently the Lottery Ticket Hypothesis (Frankle & Carbin, 2018) was also introduced. In network architecture search methods (Tan et al., 2018; Cai et al., 2018), one may rely on transferability among different datasets, accuracy approximations using aggressive learning rates and fewer training epochs, and approximations in runtime latency modeling.\nThe Search Space. The executable sub-networks in a slimmable model compose the search space of channel configurations given a network architecture. 
To train a slimmable model, we simply apply two width multipliers (Howard et al., 2017; Yu & Huang, 2019) as the upper bound and lower bound of channel numbers. For example, for all mobile networks (Sandler et al., 2018; Howard et al., 2017; Tan et al., 2018; Cai et al., 2018), we train a slimmable model that can execute between 0.15× and 1.5×. In each training iteration, we randomly and independently sample the number of channels in each layer. It is noteworthy that in residual networks, we first sample the channel number of the residual identity pathway and then randomly and independently sample the channel numbers inside each residual block. Moreover, we make all layers in a neural network slimmable, including the first convolution layer and the last fully-connected layer. In each layer, we divide the channels evenly into groups (e.g., 10 groups) to reduce the search space. In other words, during training or slimming, we sample or remove an entire group, instead of an individual channel. We note that even with channel grouping, the search space is still large. For example, in a 10-layer network with 10 channel groups in each layer, the total number of candidate channel configurations is 10^10.\nWe implement a distributed training framework with synchronized stochastic gradient descent (SGD) on PyTorch (Paszke et al., 2017). We set different random seeds in different processes such that each GPU samples diverse channel configurations in each SGD training step. All other techniques introduced in (Yu et al., 2018) and distributed training techniques introduced in (Goyal et al., 2017) are used by default. All code will be released.
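As a sketch of what one such training iteration looks like under the sandwich rule, consider the following; the model.set_channels hook that activates a sub-network for a given configuration is hypothetical, and inplace distillation is omitted for brevity:

```python
import random

def slimmable_train_step(model, images, labels, criterion, group_counts, n_random=2):
    """One slimmable training step following the sandwich rule (a sketch).
    group_counts[i] is the number of channel groups available in layer i."""
    model.zero_grad()
    configs = [list(group_counts),                    # upper bound: widest model
               [1] * len(group_counts)]               # lower bound: narrowest model
    for _ in range(n_random):                         # randomly sampled widths in between
        configs.append([random.randint(1, g) for g in group_counts])
    for cfg in configs:
        model.set_channels(cfg)                       # hypothetical sub-network switch
        criterion(model(images), labels).backward()   # gradients accumulate over widths
    # a single optimizer.step() is then applied to the accumulated gradients
```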
}, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 MAIN RESULTS", "text": "Table 1 summarizes our results on ImageNet (Deng et al., 2009) classification with various network architectures including MobileNet v1 (Howard et al., 2017), MobileNet v2 (Sandler et al., 2018), MNasNet (Tan et al., 2018), and one large model ResNet-50 (He et al., 2016). We compare our results with their default channel configurations and recent channel pruning methods (Luo et al., 2017; He et al., 2017; 2018). The top-1 errors of our baselines are from corresponding works (Sandler et al., 2018; Howard et al., 2017; He et al., 2016; Tan et al., 2018; Luo et al., 2017; He et al., 2017; 2018). To have a clear view, we divide the network architectures into four groups, namely, 200M FLOPs, 300M FLOPs, 500M FLOPs and heavy models (basically ResNet-50 based models). We evaluate their latency on same hardware environment with single-core CPU to ensure fairness. Device memory is reported as a summary of all feature maps and weights. We note that the memory footprint can be largely optimized by improving memory reusing and implementation of dedicated operators. For example, the inverted residual block can be optimized by splitting channels into groups and performing partial execution for multiple times (Sandler et al., 2018). For all network architectures we train 50 epochs with squeezed learning rate schedule to obtain a slimmable model for greedy slimming. After search, we train the optimized network architectures for full epochs (300 epochs with linearly decaying learning rate for mobile networks, 100 epochs with step learning rate schedule for ResNet-50 based models) with other training settings following previous works (Sandler et al., 2018; Howard et al., 2017; Ma et al., 2018; Zhang et al., 2017b; He et al., 2016; Yu et al., 2018; Yu & Huang, 2019) (weight initialization, weight decay, data augmentation, training/testing image resolution, optimizer, hyper-parameters of batch normalization). We exclude the parameters and FLOPs of Batch Normalization layers (Ioffe & Szegedy, 2015) following common practice since they can be fused into convolution layers.\nAs shown in Table 1, our models have better top-1 accuracy compared with the default channel configuration of MobileNet v1, MobileNet v2 and ResNet-50 across different computational budgets. We even have improvements over RL-searched MNasNet (Tan et al., 2018), where the filter numbers are already included in its search space. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs)." }, { "heading": "4.2 VISUALIZATION AND DISCUSSION", "text": "In this part, we visualize our optimized channel configurations and discuss some insights from the results.\nComparison with Default Channel Numbers. We first compare our results with default channels in MobileNet v2 (Sandler et al., 2018). We show the optimized number of channels (left) and the percentage compared with default channels (right) in Figure 4. Compared with default MobileNet v2, our optimized configuration has fewer channels in shallow layers and more channels in deep ones.\nComparison with Width Multiplier Heuristic. 
Applying a width multiplier (Howard et al., 2017), a global hyper-parameter across all layers, is a commonly used heuristic to trade off between model accuracy and efficiency (Sandler et al., 2018; Howard et al., 2017; Ma et al., 2018; Zhang et al., 2017b). We search for optimal channels at 207M, 305M and 505M FLOPs, corresponding to MobileNet v2 0.75×, 1.0× and 1.3×. Figure 5a shows that under different budgets, AutoSlim applies a different width scaling to each layer.\nComparison with Model Pruning Methods. Next, we compare our optimized channel configuration with the model pruning method AMC (He et al., 2018). In Figure 5a, we show the number of channels in all layers of the optimized MobileNet v2. We observe several characteristics of our optimized channel configurations. First, AutoSlim-MobileNet-v2 has many more channels in deep layers, especially for deep depthwise convolutions. For example, AutoSlim-MobileNet-v2 has 1920 channels in the second-to-last layer, compared with 848 channels in AMC-MobileNet-v2. Second, AutoSlim-MobileNet-v2 has fewer channels in shallow layers. For example, AutoSlim-MobileNet-v2 has only 8 channels in the first convolution layer, while AMC-MobileNet-v2 has 24 channels. It is noteworthy that although shallow layers have a small number of channels, the spatial size of the feature maps is large, so overall these layers account for a large share of the computational overhead." }, { "heading": "4.3 CIFAR10 EXPERIMENTS", "text": "In addition to ImageNet, we also conduct experiments on the CIFAR10 dataset (Krizhevsky, 2009). We use the same weight decay hyper-parameter, initial learning rate and learning rate schedule as in the ImageNet experiments. We note that these training settings may not be optimal for the CIFAR10 dataset; nevertheless, we report an ablative study with the same hyper-parameters and settings.\nTable 3: Model | Searched On | FLOPs | Top-1 Err.\nMobileNet v2 0.75× | - | 59M | 8.6\nAutoSlim-MobileNet v2 | CIFAR10 | 59M | 7.0 (1.6)\nAutoSlim-MobileNet v2 | ImageNet | 63M | 9.9 (-1.3)\nWe first report the performance of MobileNet v2 (Sandler et al., 2018) with the default channel configurations. We then search with the proposed AutoSlim to obtain optimized channel configurations at the same FLOPs (we hold out 5K images from the training set as a validation set during the search). Finally we train the optimized architectures individually with the same settings as the baselines. Table 2 shows that AutoSlim models have higher accuracy than the baselines on the CIFAR10 dataset.\nWe further study the transferability of the network architectures learned from ImageNet to the CIFAR10 dataset, and compare them with the channel configuration searched on CIFAR10 directly. The results are shown in Table 3. It suggests that the channel configuration optimized on ImageNet does not generalize to CIFAR10. Compared with the optimized architecture for ImageNet, we observed that the optimized architecture for CIFAR10 has much fewer channels in deep layers, which we conjecture may lead to better generalization on the test set for small datasets like CIFAR10. It may also be due to the inconsistent image resolutions between ImageNet (224× 224) and CIFAR10 (32× 32)." }, { "heading": "5 CONCLUSION", "text": "We presented AutoSlim, a simple one-shot approach to neural architecture search for the number of channels to achieve better accuracy under constrained resources. We demonstrated the effectiveness of AutoSlim with extensive experiments on large-scale ImageNet classification and various network backbones including MobileNet v1, MobileNet v2, ResNet-50 and the RL-searched MNasNet.
AutoSlim achieved significant improvements (with much lower search cost) compared with three categories of baselines: human-designed heuristics, channel pruning methods, and reinforcement-learning-based architecture search methods. Our proposed solution, AutoSlim, automates the design of channel configurations in a neural network for resource-constrained devices." } ]
2,019
null
SP:6c5368ae026fc1aaf92bdc208d90e4eec999575a
[ "This paper presents an end-to-end approach for clustering. The proposed model is called CNC. It simultaneously learns a data embedding that preserve data affinity using Siamese networks, and clusters data in the embedding space. The model is trained by minimizing a differentiable loss function that is derived from normalized cuts. As such, the embedding phase renders the data point friendly to spectral clustering. ", "The paper suggests a differentiable objective that can be used to train a network to output cluster probabilities for a given datapoint, given a fixed number of clusters and embeddings of the data points to be clustered. In particular, this objective can be seen as a relaxation of the normalized cut objective, where indicator variables in the original formulation are replaced with their expectations under the trained model. The authors experiment with a number of clustering datasets where the number of cluster is known beforehand (and where, for evaluation purposes, the ground truth is known), and find that their method generally improves over the clustering performance of SpectralNet (Shaham et al., 2018) in terms of accuracy and normalized mutual information, and that it finds solutions with lower normalized cut values." ]
We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity and inter-cluster dissimilarity. We define a differentiable loss function equivalent to the expected normalized cuts. Unlike much of the work in unsupervised deep learning, our trained model directly outputs final cluster assignments, rather than embeddings that need further processing to be usable. Our approach generalizes to unseen datasets across a wide variety of domains, including text and images. Specifically, we achieve state-of-the-art results on popular unsupervised clustering benchmarks (e.g., MNIST, Reuters, CIFAR-10, and CIFAR-100), outperforming the strongest baselines by up to 10.9%. Our generalization results are superior (by up to 21.9%) to the recent top-performing clustering approach with the ability to generalize.
[]
[ { "authors": [ "Reid Andersen", "Fan Chung", "Kevin Lang" ], "title": "Local graph partitioning using pagerank vectors", "venue": "In FOCS,", "year": 2006 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi", "Vikas Sindhwani" ], "title": "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples", "venue": "Journal of machine learning research,", "year": 2006 }, { "authors": [ "Matan Ben-Yosef", "Daphna Weinshall" ], "title": "Gaussian mixture generative adversarial networks for diverse datasets, and the unsupervised clustering of images", "venue": null, "year": 2018 }, { "authors": [ "Yoshua Bengio", "Jean-François Paiement", "Pascal Vincent", "Olivier Delalleau", "Nicolas Le Roux", "Marie Ouimet" ], "title": "Out-of-sample extensions for lle, isomap, mds, eigenmaps, and spectral clustering", "venue": "In Proceedings of the 16th International Conference on Neural Information Processing Systems,", "year": 2003 }, { "authors": [ "Deng Cai", "Xiaofei He", "Jiawei Han" ], "title": "Locally consistent concept factorization for document clustering", "venue": "IEEE Trans. on Knowl. and Data Eng.,", "year": 2011 }, { "authors": [ "Xiaojun Chang", "Feiping Nie", "Zhigang Ma", "Yi Yang" ], "title": "Balanced k-means and min-cut clustering", "venue": "arXiv preprint arXiv:1411.6235,", "year": 2014 }, { "authors": [ "Xiaojun Chen", "Joshua Zhexue Huang", "Feiping Nie", "Renjie Chen", "Qingyao Wu" ], "title": "A self-balanced min-cut algorithm for image clustering", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Fan Chung" ], "title": "Four proofs for the cheeger inequality and graph partition algorithms", "venue": "In Proceedings of ICCM,", "year": 2007 }, { "authors": [ "Nat Dilokthanakul", "Pedro AM Mediano", "Marta Garnelo", "Matthew CH Lee", "Hugh Salimbeni", "Kai Arulkumaran", "Murray Shanahan" ], "title": "Deep unsupervised clustering with gaussian mixture variational autoencoders", "venue": "arXiv preprint arXiv:1611.02648,", "year": 2016 }, { "authors": [ "Kamran Ghasedi Dizaji", "Amirhossein Herandi", "Cheng Deng", "Weidong Cai", "Heng Huang" ], "title": "Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Yee Whye Teh and Mike Titterington (eds.), AISTATS,", "year": 2010 }, { "authors": [ "William L. Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Weihua Hu", "Takeru Miyato", "Seiya Tokui", "Eiichi Matsumoto", "Masashi Sugiyama" ], "title": "Learning discrete representations via information maximizing self-augmented training", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Xu Ji", "João F. 
Henriques", "Andrea Vedaldi" ], "title": "Invariant information distillation for unsupervised image segmentation and clustering", "venue": "arXiv preprint arXiv:", "year": 2019 }, { "authors": [ "Zhuxi Jiang", "Yin Zheng", "Huachun Tan", "Bangsheng Tang", "Hanning Zhou" ], "title": "Variational deep embedding: An unsupervised and generative approach to clustering", "venue": "In Proceedings of the 26th International Joint Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "George Karypis", "Vipin Kumar" ], "title": "Multilevelk-way partitioning scheme for irregular graphs", "venue": "Journal of Parallel and Distributed computing,", "year": 1998 }, { "authors": [ "George Karypis", "Vipin Kumar" ], "title": "Multilevel k-way hypergraph partitioning", "venue": "VLSI design,", "year": 2000 }, { "authors": [ "George Karypis", "Rajat Aggarwal", "Vipin Kumar", "Shashi Shekhar" ], "title": "Multilevel hypergraph partitioning: applications in vlsi domain", "venue": "IEEE T VLSI SYST,", "year": 1999 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Jure Leskovec" ], "title": "Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters", "venue": "Internet Mathematics,", "year": 2009 }, { "authors": [ "Hanyang Liu", "Junwei Han", "Feiping Nie", "Xuelong Li" ], "title": "Balanced clustering with least square regression", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Arturo Mendoza Quispe", "Caroline Petitjean", "Laurent Heutte" ], "title": "Extreme learning machine for out-of-sample extension in laplacian eigenmaps", "venue": "Pattern Recognition,", "year": 2016 }, { "authors": [ "Pekka Miettinen", "Mikko Honkala", "Janne Roos" ], "title": "Using METIS and hMETIS algorithms in circuit partitioning", "venue": "Helsinki University of Technology,", "year": 2006 }, { "authors": [ "James R. Munkres" ], "title": "Algorithms for the Assignment and Transportation Problems", "venue": "Journal of the Society for Industrial and Applied Mathematics,", "year": 1957 }, { "authors": [ "Andrew Y Ng", "Michael I Jordan", "Yair Weiss" ], "title": "On spectral clustering: Analysis and an algorithm", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Peter Sanders", "Christian Schulz" ], "title": "High quality graph partitioning. In Graph Partitioning and Graph Clustering, volume 588 of Contemporary Mathematics, pp. 1–18", "venue": "American Mathematical Society,", "year": 2012 }, { "authors": [ "Peter Sanders", "Christian Schulz" ], "title": "Distributed evolutionary graph partitioning", "venue": "In Proceedings of the Meeting on Algorithm Engineering & Expermiments,", "year": 2012 }, { "authors": [ "Peter Sanders", "Christian Schulz" ], "title": "Think Locally, Act Globally: Highly Balanced Graph Partitioning", "venue": "In Proceedings of the 12th International Symposium on Experimental Algorithms (SEA’13),", "year": 2013 }, { "authors": [ "Uri Shaham", "Roy R. 
Lederman" ], "title": "Learning by coincidence: Siamese networks and common variable learning", "venue": "Pattern Recognition,", "year": 2018 }, { "authors": [ "Uri Shaham", "Kelly Stanton", "Henry Li", "Ronen Basri", "Boaz Nadler", "Yuval Kluger" ], "title": "Spectralnet: Spectral clustering using deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jianbo Shi", "Jitendra Malik" ], "title": "Normalized cuts and image segmentation", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2000 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke", "Alexander A Alemi" ], "title": "Inception-v4, inceptionresnet and the impact of residual connections on learning", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Ulrike Von Luxburg" ], "title": "A tutorial on spectral clustering", "venue": "Statistics and computing,", "year": 2007 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "Bo Yang", "Xiao Fu", "Nicholas D Sidiropoulos", "Mingyi Hong" ], "title": "Towards k-means-friendly spaces: Simultaneous deep learning and clustering", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yilin Zhang", "Karl Rohe" ], "title": "Understanding regularized spectral clustering via graph conductance", "venue": "In NeurIPS,", "year": 2018 } ]
[ { "heading": null, "text": "We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity as well as inter-cluster dissimilarity. We define a differentiable loss function equivalent to the expected normalized cuts. Unlike much of the work in unsupervised deep learning, our trained model directly outputs final cluster assignments, rather than embeddings that need further processing to be usable. Our approach generalizes to unseen datasets across a wide variety of domains, including text, and image. Specifically, we achieve state-of-the-art results on popular unsupervised clustering benchmarks (e.g., MNIST, Reuters, CIFAR-10, and CIFAR-100), outperforming the strongest baselines by up to 10.9%. Our generalization results are superior (by up to 21.9%) to the recent top-performing clustering approach with the ability to generalize." }, { "heading": "1 INTRODUCTION", "text": "Clustering unlabeled data is an important problem from both a scientific and practical perspective. As technology plays a larger role in daily life, the volume of available data has exploded. However, labeling this data remains very costly and often requires domain expertise. Therefore, unsupervised clustering methods are one of the few viable approaches to gain insight into the structure of these massive unlabeled datasets.\nOne of the most popular clustering methods is spectral clustering (Shi & Malik, 2000; Ng et al., 2002; Von Luxburg, 2007), which first embeds the similarity of each pair of data points in the Laplacian’s eigenspace and then uses k-means to generate clusters from it. Spectral clustering not only outperforms commonly used clustering methods, such as k-means (Von Luxburg, 2007), but also allows us to directly minimize the pairwise distance between data points and solve for the optimal node embeddings analytically. Moreover, it is shown that the eigenvector of the normalized Laplacian matrix can be used to find the approximate solution to the well known normalized cuts problem (Ng et al., 2002; Von Luxburg, 2007).\nIn this work, we introduce CNC, a framework for Clustering by learning to optimize expected Normalized Cuts. We show that by directly minimizing a continuous relaxation of the normalized cuts problem, CNC enables end-to-end learning approach that outperforms top-performing clustering approaches. We demonstrate that our approach indeed can produce lower normalized cut values than the baseline methods such as SpectralNet, which consequently results in better clustering accuracy.\nLet us motivate CNC through a simple example. In Figure 1, we want to cluster 6 images from CIFAR-10 dataset into two clusters. The affinity graph for these data points is shown in Figure 1(a) (details of constructing such graph is discussed in Section 4.2). In this example, it is obvious that the optimal clustering is the result of cutting the edge connecting the two triangles. Cutting this edge will result in the optimal value for the normalized cuts objective. In CNC, we define a new differentiable loss function equivalent to the expected normalized cuts objective. We train a deep learning model to minimize the proposed loss in an unsupervised manner without the need for any labeled datasets. Our trained model directly returns the probabilities of belonging to each cluster (Figure 1(b)). 
In this example, the optimal normalized cuts value is 0.286 (Equation 1), and as we can see, the CNC loss also converges to this value (Figure 1(c)).\nWe compare the performance of CNC to several learning-based clustering approaches (SpectralNet (Shaham et al., 2018), DEC (Xie et al., 2016), DCN (Yang et al., 2017), VaDE (Jiang et al., 2017), DEPICT (Ghasedi Dizaji et al., 2017), IMSAT (Hu et al., 2017), and IIC (Ji et al., 2019)) on four datasets: MNIST, Reuters, CIFAR-10, and CIFAR-100. Our results show up to 10.9% improvement over the baselines. Moreover, generalizing spectral embeddings to unseen data points, a task commonly referred to as out-of-sample extension (OOSE), is non-trivial (Bengio et al., 2003; Belkin et al., 2006; Mendoza Quispe et al., 2016). Our results confirm that CNC generalizes to unseen data. Our generalization results are superior (by up to 21.9%) to SpectralNet (Shaham et al., 2018), the recent top-performing clustering approach with the ability to generalize." }, { "heading": "2 RELATED WORK", "text": "Recent deep learning approaches to clustering attempt to embed the input data into a form that is amenable to clustering by k-means or Gaussian Mixture Models. (Yang et al., 2017; Xie et al., 2016) focused on learning representations for clustering. To find clustering-friendly latent representations and to better cluster the data, DCN (Yang et al., 2017) proposed a joint dimensionality reduction (DR) and k-means clustering approach in which DR is accomplished via learning a deep neural network. DEC (Xie et al., 2016) simultaneously learns cluster assignments and the underlying feature representation by iteratively updating a target distribution to sharpen cluster associations.\nSeveral other approaches rely on a variational autoencoder that utilizes a Gaussian mixture prior (Jiang et al., 2017; Dilokthanakul et al., 2016; Hu et al., 2017; Ji et al., 2019; Ben-Yosef & Weinshall, 2018). These approaches are mainly based on data augmentation, where the network is trained to maximize the mutual information between inputs and predicted clusters, while regularizing the network so that the cluster assignment of the data points is consistent with the assignment of the augmented points.\nDifferent clustering objectives, such as self-balanced k-means and balanced min-cut, have also been exhaustively studied (Liu et al., 2017; Chen et al., 2017; Chang et al., 2014). One of the most effective techniques is spectral clustering, which first generates node embeddings in the eigenspace of the graph Laplacian, and then applies k-means clustering to these vectors (Shi & Malik, 2000; Ng et al., 2002; Von Luxburg, 2007). To address the fact that clusters with the lowest graph conductance tend to have few nodes (Leskovec, 2009; Zhang & Rohe, 2018), (Zhang & Rohe, 2018) proposed regularized spectral clustering to encourage more balanced clusters.\nGeneralizing clustering to unseen nodes and graphs is nontrivial (Bengio et al., 2003; Belkin et al., 2006; Mendoza Quispe et al., 2016). A recent work, SpectralNet (Shaham et al., 2018), takes a deep learning approach to spectral clustering that generalizes to unseen data points. This approach first learns embeddings of the similarity of each pair of data points in the Laplacian’s eigenspace and then applies k-means to those embeddings to generate clusters. Unlike SpectralNet, we propose an end-to-end learning approach with a differentiable loss that directly minimizes the normalized cuts.
We show that our approach can indeed produce lower normalized cut values than baseline methods such as SpectralNet, which consequently results in better clustering accuracy. Our evaluation results show that CNC improves generalization accuracy on unseen data points by up to 21.9%." }, { "heading": "3 PRELIMINARIES", "text": "Since the CNC objective is based on optimizing normalized cuts, in this section we briefly review the formal definition of this metric." }, { "heading": "3.1 FORMAL DEFINITION OF NORMALIZED CUTS", "text": "Let G = (V, E, W) be a graph where V = {vi} and E = {e(vi, vj) | vi ∈ V, vj ∈ V} are the sets of nodes and edges in the graph and wij ∈ W is the edge weight of e(vi, vj). Let n be the number of nodes. A graph G can be clustered into g disjoint sets S1, S2, . . . , Sg, where the union of the nodes in those sets is V (⋃_{k=1}^{g} Sk = V) and each node belongs to only one set (⋂_{k=1}^{g} Sk = ∅), by simply removing the edges connecting those sets. For example, in Figure 1(a), by removing one edge two disjoint clusters are formed.\nNormalized cuts (Ncuts), which is defined based on graph conductance, has been studied in (Shi & Malik, 2000; Zhang & Rohe, 2018); the cost of a cut that forms disjoint sets S1, S2, . . . , Sg is computed as:\nNcuts(S1, S2, . . . , Sg) = Σ_{k=1}^{g} cut(Sk, S̄k) / vol(Sk, V) (1)\nwhere S̄k represents the complement of Sk, i.e., S̄k = ⋃_{i≠k} Si. cut(Sk, S̄k) is the total weight of the edges that are removed from G in order to form the disjoint sets Sk and S̄k, and vol(Sk, V) is the total weight of the edges (wij) incident to the nodes of Sk. The cut and vol are:\ncut(Sk, S̄k) = Σ_{vi∈Sk, vj∈S̄k} wij ,  vol(Sk, V) = Σ_{vi∈Sk} Σ_{vj∈V} wij (2)\nNote that in Equation 2, Sk and S̄k are disjoint, i.e., Sk ∩ S̄k = ∅, while in vol, Sk ⊂ V. In the running example (Figure 1), since the edge weights are one, cut(S1, S̄1) = cut(S2, S̄2) = 1 and vol(S1, V) = vol(S2, V) = 2 + 2 + 3 = 7. Thus Ncuts(S1, S2) = 1/7 + 1/7 = 0.286. In this example one can see that such a clustering results in the minimum value of the normalized cuts. CNC aims to find a cut for which the normalized cuts objective (Equation 1) is minimized." }, { "heading": "4 CNC FRAMEWORK", "text": "Finding the cluster assignments that minimize the normalized cuts is NP-complete; an approximation to this problem based on the eigenvectors of the normalized graph Laplacian has been studied in (Shi & Malik, 2000; Zhang & Rohe, 2018). CNC, on the other hand, is a neural network framework for learning to cluster in the absence of labeled examples by directly minimizing a continuous relaxation of the normalized cuts. As shown in Algorithm 1, end-to-end training of CNC contains two steps, i.e., (i) data point embedding (line 3) and (ii) clustering (lines 4-9). In the embedding step, the goal is to learn embeddings that capture the affinity of the data points, while the clustering step uses those embeddings to learn the CNC model and outputs the cluster assignments. Next, we first focus on the clustering step and introduce our new differentiable loss function to train the CNC model. Later, in Section 4.2, we discuss the details of the embedding step." }, { "heading": "4.1 CLUSTERING STEP: LEARN CNC MODEL", "text": "In this section, we describe the clustering step in Algorithm 1 (lines 4-9). For each data point xi, the input to the clustering step is an embedding vi ∈ R^d (details in Section 4.2).
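As a quick numerical check of the Section 3.1 definitions before turning to the learned model, the short sketch below computes cut, vol, and Ncuts on the two-triangle running example of Figure 1. The adjacency structure is an assumption reconstructed from the description (two unit-weight triangles joined by a single unit-weight bridge edge), not data taken from the paper.

```python
import numpy as np

# Two triangles of 3 nodes each, joined by one bridge edge (0-indexed nodes).
# Unit weights, matching the running example in Figure 1 (reconstructed, assumed).
edges = [(0, 1), (1, 2), (0, 2),       # triangle 1 -> cluster S1
         (3, 4), (4, 5), (3, 5),       # triangle 2 -> cluster S2
         (2, 3)]                        # bridge edge that the optimal cut removes
n = 6
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

def ncuts(W, clusters):
    total = 0.0
    for S in clusters:
        S = np.asarray(S)
        Sbar = np.setdiff1d(np.arange(len(W)), S)
        cut = W[np.ix_(S, Sbar)].sum()          # Equation 2, cut(S, S̄)
        vol = W[S, :].sum()                      # Equation 2, vol(S, V)
        total += cut / vol                       # Equation 1
    return total

print(ncuts(W, [[0, 1, 2], [3, 4, 5]]))  # 1/7 + 1/7 ≈ 0.286
```

Running it prints 0.2857..., matching the 1/7 + 1/7 = 0.286 computed above.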
The goal is to learn a CNC model Fθ : R^d → R^g that, for a given embedding vi ∈ R^d, returns yi = Fθ(vi) ∈ R^g, representing the assignment probabilities over g clusters. Clearly, for n data points it returns Y ∈ R^{n×g}, where Yik represents the probability that vi belongs to cluster Sk. The CNC model Fθ is implemented using a neural network, where the parameter vector θ denotes the network weights. We propose a loss function based on the output Y to calculate the expected normalized cuts; thus CNC learns Fθ by minimizing this loss (Equation 7).\nRecall that cut(Sk, S̄k) is the total weight of the edges that are removed from G in order to form the disjoint sets Sk and S̄k. In our setup, the embeddings are the nodes of graph G, and the neighbors of an embedding vi are its k nearest neighbors. Let Yik be the probability that node vi belongs to cluster Sk. The probability that node vj does not belong to Sk is then 1 − Yjk. Therefore, E[cut(Sk, S̄k)] can be formulated as in Equation 3, where N(vi) is the set of nodes adjacent to vi.\nAlgorithm 1 End-to-End Training of CNC: Clustering by learning to optimize expected Normalized Cuts\n1: Input: dataset X ⊆ R^m, number of clusters g, data point embedding size d, batch size b\n2: Output: Cluster assignments of data points.\nPreprocessing step, learn data point embeddings (details in Section 4.2):\n3: Given a dataset X = {x1, . . . , xn}, train a Siamese network Gθsiamese : R^m → R^d to find embeddings {v1, . . . , vn}, vi ∈ R^d, that represent the affinity of the data points.\nClustering step, learn CNC model Fθ (details in Section 4.1):\n4: while CNC loss in Equation 6 not converged do\n5: Sample a random minibatch M of size b from the embeddings\n6: Compute affinity graph W ∈ R^{b×b} over M based on the k nearest neighbors\n7: Use M and W to train the CNC model Fθ : R^d → R^g that minimizes the expected normalized cuts (Equation 6) via backpropagation. For a data point with embedding vi, the output yi = Fθ(vi) represents the assignment probabilities over g clusters.\n8: end while\nInference, cluster assignments:\n9: For every data point xi whose embedding is vi, return the arg max of yi = Fθ(vi) as its cluster assignment.\nE_{Y∼Fθ}[cut(Sk, S̄k)] = Σ_{vi∈Sk} Σ_{vj∈N(vi)} wij Yik (1 − Yjk) (3)\nSince the weight matrix W represents the edge weights of adjacent nodes, we can rewrite Equation 3 as:\nE_{Y∼Fθ}[cut(Sk, S̄k)] = reduce-sum( Y:,k (1 − Y:,k)ᵀ ⊙ W ) (4)\nThe element-wise product with the weight matrix (⊙ W) ensures that only adjacent nodes are considered. Moreover, the result of Y:,k (1 − Y:,k)ᵀ ⊙ W is an n × n matrix, and reduce-sum is the sum over all of its elements. From Equation 2, vol(Sk, V) is the total weight of the edges (wij) incident to the nodes of Sk. Let D be a column vector of size n where Di is the total edge weight from node vi. We can update Equation 3 as follows to find the expected normalized cuts:\nE_{Y∼Fθ}[Ncuts] = Σ_{k=1}^{g} Σ_{vi∈Sk} Σ_{vj∈N(vi)} wij Yik (1 − Yjk) / ( Σ_{vl∈V} Ylk Dl ) (5)\nThe matrix representation is given in Equation 6, where Γ = Yᵀ D is a vector in R^g and g is the number of sets/clusters. ⊘ is element-wise division, the result of (Y ⊘ Γ)(1 − Y)ᵀ ⊙ W is an n × n matrix, and reduce-sum is the sum over all of its elements:\nE_{Y∼Fθ}[Ncuts] = reduce-sum( Σ_{k=1}^{g} (Y:,k ⊘ Γk)(1 − Y:,k)ᵀ ⊙ W ) = reduce-sum( (Y ⊘ Γ)(1 − Y)ᵀ ⊙ W ) (6)\nThe CNC model Fθ is implemented using a neural network, where the parameter vector θ denotes the network weights (yi = Fθ(vi)).
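To make Equation 6 concrete, here is a hedged NumPy sketch of the expected normalized cuts loss computed from soft assignments Y and a minibatch affinity matrix W. It is an illustration of the formula, not the authors' implementation; in practice Y would come from Fθ and the computation would run in a differentiable framework.

```python
import numpy as np

def expected_ncuts(Y, W):
    """Equation 6: reduce-sum((Y ⊘ Γ)(1 − Y)ᵀ ⊙ W).

    Y: (n, g) soft cluster assignments (rows sum to 1).
    W: (n, n) symmetric affinity matrix of the minibatch.
    """
    D = W.sum(axis=1, keepdims=True)   # (n, 1) total edge weight per node
    Gamma = Y.T @ D                     # (g, 1), Γ = Yᵀ D
    Yn = Y / Gamma.T                    # element-wise division, Y ⊘ Γ
    return float(((Yn @ (1.0 - Y).T) * W).sum())
```

With one-hot rows in Y this reduces exactly to Equation 1 (on the two-triangle toy graph with hard assignments it returns 2/7), which is why minimizing it drives the soft assignments toward low-normalized-cut clusterings.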
CNC is trained to optimize Equation 7 via backpropagation (Algorithm 1):\narg min_θ reduce-sum( (Y ⊘ Γ)(1 − Y)ᵀ ⊙ W ) (7)\nAs can be seen, the affinity graph W is part of the CNC loss (Equation 7). Clearly, when the number of data points (n) is large, such a calculation can be expensive. However, in our experimental results, we show that for large datasets (e.g., Reuters contains 685,071 documents), it is possible to optimize the loss on randomly sampled minibatches of data. We also build the affinity graph over a given minibatch using the embeddings and based on their k nearest neighbors (Algorithm 1, lines 5-6). Specifically, in our implementation, the CNC model Fθ is a fully connected layer followed by a Gumbel-Softmax, trained on randomly sampled minibatches of data to minimize Equation 6. In Section 5.7, through a sensitivity analysis, we show that the minibatch size affects the accuracy of our model. When training is over, the final assignment of a data point with embedding vi to a cluster is the arg max of yi = Fθ(vi) (Algorithm 1, line 9)." }, { "heading": "4.2 EMBEDDING STEP", "text": "In this section, we discuss the embedding step (line 3 in Algorithm 1). Different affinity measures, such as simple Euclidean distance or nearest neighbor pairs combined with a Gaussian kernel, have been used in spectral clustering. Recently it has been shown that the unsupervised application of a Siamese network to determine the distances improves the quality of the clustering (Shaham et al., 2018).\nIn this work, we also use Siamese networks to learn embeddings that capture the affinities of the data points. The Siamese network is trained to learn an adaptive nearest neighbor metric. It learns the affinities directly from Euclidean proximity by “labeling” points xi, xj positive if ‖xi − xj‖ is small and negative otherwise. In other words, it generates embeddings such that adjacent nodes are closer in the embedding space and non-adjacent nodes are farther apart. Such a network is typically trained to minimize the contrastive loss:\nLsiamese = ‖vi − vj‖² if (xi, xj) is a positive pair, and Lsiamese = max(1 − ‖vi − vj‖, 0)² if (xi, xj) is a negative pair,\nwhere vi = Gθsiamese(xi), and Gθsiamese : R^m → R^d is a Siamese network that transforms representations in the input space xi ∈ R^m to embeddings vi ∈ R^d." }, { "heading": "5 EXPERIMENTS", "text": "The main goals of our experiments are to evaluate: (a) the performance of CNC against existing clustering approaches; (b) the ability of CNC to generalize to unseen data compared to the top-performing generalizable baseline; (c) the effectiveness of minimizing normalized cuts on improving the clustering results; and (d) the generalization performance of CNC as we vary the number of data points in the training dataset." }, { "heading": "5.1 DATASETS AND BASELINE METHODS", "text": "We evaluate the performance of CNC in comparison to several deep learning-based clustering approaches on four real-world datasets: MNIST, Reuters, CIFAR-10, and CIFAR-100. The details of the datasets are as follows:\n• MNIST is a collection of 70,000 28×28 gray-scale images of handwritten digits, divided into 60,000 training images and 10,000 test images.\n• The Reuters dataset is a collection of English news articles labeled by category. Like SpectralNet, DEC, and VaDE, we used the following categories as labels: corporate/industrial, government/social, markets, and economics, and discarded all documents with multiple labels. Each article is represented by a tf-idf vector using the 2000 most frequent words. The dataset contains 685,071 documents.
We divided the data randomly into a 90%-10% split to evaluate the generalization ability of CNC. We also investigate the impact of training data size on generalization by considering the following splits: 90%-10%, 70%-30%, 50%-50%, 20%-80%, and 10%-90%.\n• CIFAR-10 consists of 60,000 32×32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.\n• CIFAR-100 has 100 classes containing 600 images each, with a 500/100 train/test split per class.\nIn all runs we assume the number of clusters is given. In MNIST and CIFAR-10 the number of clusters (g) is 10, g = 4 in Reuters, and g = 100 in CIFAR-100. We compare CNC to SpectralNet (Shaham et al., 2018), DEC (Xie et al., 2016), DCN (Yang et al., 2017), VaDE (Jiang et al., 2017), DEPICT (Ghasedi Dizaji et al., 2017), IMSAT (Hu et al., 2017), and IIC (Ji et al., 2019). While (Yang et al., 2017; Xie et al., 2016) focused on learning representations for clustering, other approaches (Jiang et al., 2017; Dilokthanakul et al., 2016; Hu et al., 2017; Ji et al., 2019; Ben-Yosef & Weinshall, 2018) rely on a variational autoencoder that utilizes a Gaussian mixture prior. SpectralNet (Shaham et al., 2018) takes a deep learning approach to spectral clustering that generalizes to unseen data points. Table 1 shows the results reported for these seven methods.\nSimilar to (Shaham et al., 2018), for MNIST and Reuters we use publicly available pre-trained autoencoders¹. The autoencoder used to map the Reuters data to code space was trained on a random subset of 10,000 samples from the full dataset. Similar to (Hu et al., 2017), for CIFAR-10 and CIFAR-100 we applied 50-layer deep residual networks pre-trained on ImageNet to extract features and used them for clustering.\n¹https://github.com/slim1017/VaDE/tree/master/pretrain_weights" }, { "heading": "5.2 PERFORMANCE MEASURES", "text": "We use two common measures, the unsupervised clustering accuracy (ACC) and the normalized mutual information (NMI) (Cai et al., 2011), to evaluate the accuracy of the clustering. Both ACC and NMI are in [0, 1], with higher values indicating better correspondence between the clusters and the true labels. Note that the true labels are never used, neither in training nor in testing.\nClustering Accuracy (ACC): For data points X = {x1, . . . , xn}, let l = (l1, . . . , ln) and c = (c1, . . . , cn) be the true labels and predicted clusters, respectively. The ACC is defined as:\nACC(l, c) = (1/n) max_{π∈Π} Σ_{i=1}^{n} 1{li = π(ci)}\nwhere Π is the collection of all permutations of 1, . . . , g. The optimal permutation π can be computed using the Kuhn-Munkres algorithm (Munkres, 1957).\nNormalized Mutual Information (NMI): Let I(l; c) be the mutual information between l and c, and H(·) denote entropy. The NMI is:\nNMI(l, c) = I(l; c) / max{H(l), H(c)}" }, { "heading": "5.3 EXPERIMENTAL RESULTS", "text": "For each dataset we trained a Siamese network (Hadsell et al., 2006; Shaham & Lederman, 2018) to learn embeddings that represent the affinity of the data points by only considering the k nearest neighbors of each data point. In Table 1, we compare clustering performance across the four benchmark datasets. Since most of the clustering approaches do not generalize to unseen data points, all data has been used for training (later, in Section 5.4, we use a 90%-10% train/test split to evaluate generalizability).\nWhile the improvement of CNC is marginal on MNIST, it performs better across the other three datasets.
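Before detailing these results, a brief aside on computing the ACC metric defined above: the maximization over permutations π is an assignment problem solved by the Kuhn-Munkres algorithm. The sketch below uses SciPy's linear_sum_assignment for this; it is an assumed illustration, not the authors' evaluation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(labels, clusters, g):
    """ACC: best accuracy over all mappings of cluster ids to label ids."""
    # Confusion matrix: counts[i, j] = number of points with cluster i, label j.
    counts = np.zeros((g, g), dtype=np.int64)
    for c, l in zip(clusters, labels):
        counts[c, l] += 1
    # Kuhn-Munkres maximizes matched counts (minimize negated counts).
    row, col = linear_sum_assignment(-counts)
    return counts[row, col].sum() / len(labels)

# Example: labels = [0, 0, 1, 1], clusters = [1, 1, 0, 0] -> ACC = 1.0
```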
Specifically, on CIFAR-10, CNC outperforms SpectralNet and IIC on ACC by 20.1% and 10.9%, respectively. Moreover, the NMI is improved by 12.3%. The results on Reuters and CIFAR-100 show 0.021% and 11% improvements on ACC. The NMI is also 27% better on CIFAR-100. The fact that CNC outperforms existing approaches on most datasets suggests the effectiveness of using our deep learning approach to optimize normalized cuts for clustering.\nWe performed an ablation study to evaluate the impact of the embeddings by omitting this step in Algorithm 1. We find that on both the MNIST and Reuters datasets, adding the embedding step improves the performance, but CNC without embeddings still outperforms SpectralNet without embeddings. On MNIST, the ACC and NMI are 0.945 and 0.873, whereas with the embeddings, ACC and NMI increase to 0.972 and 0.924 (Table 1). Without embeddings, CNC outperforms SpectralNet (with ACC of 0.8 and NMI of 0.814). On Reuters, the ACC and NMI are 0.684 and 0.428, whereas with the embeddings, ACC and NMI increase to 0.824 and 0.583. Again, even without embeddings, CNC outperforms SpectralNet (with ACC of 0.605 and NMI of 0.401)." }, { "heading": "5.4 GENERALIZATION", "text": "We further evaluate the generalization ability of CNC by randomly dividing the data into a 90%-10% split, training on the training set, and reporting the ACC and NMI on the test set (Table 2). Among the seven methods in Table 1, only SpectralNet is able to generalize to unseen data points. CNC outperforms SpectralNet on most datasets by up to 21.9% on ACC and up to 10.7% on NMI. Note that a simple arg max over the output of CNC retrieves the cluster assignments, while SpectralNet relies on k-means to predict the final clusters." }, { "heading": "5.5 IMPACT OF NORMALIZED CUTS IN CLUSTERING", "text": "To evaluate the impact of normalized cuts on the clustering task, we calculate the numerical value of the normalized cuts (Equation 1) over the clustering results of CNC and SpectralNet. Since such a calculation over the whole dataset is very expensive, we only show this result over the test set.\nTable 3 shows the numerical value of the normalized cuts over the clustering results of CNC and SpectralNet. As one can see, CNC is able to find better cuts than SpectralNet. Moreover, we observe that for the datasets where the improvement of CNC is marginal (MNIST and Reuters), the normalized cuts of CNC are also only slightly better than those of SpectralNet, while for CIFAR-10 and CIFAR-100, where the accuracy improved significantly, the normalized cuts of CNC are also much smaller than those of SpectralNet. The higher accuracy (ACC in Table 2) and the smaller normalized cuts (Table 3) verify that the CNC loss function is indeed a good objective for the clustering task." }, { "heading": "5.6 IMPACT OF TRAINING DATA SIZE ON GENERALIZATION", "text": "As shown in the generalization results (Table 2), when we reduce the size of the training data to 90%, the accuracy of CNC changes only slightly compared to training over the whole data (Table 1). Based on this observation, we next investigate how varying the size of the training dataset affects generalization. In other words, we ask how the ACC and NMI on the test data change as we vary the size of the training dataset.\nWe ran experiments on the Reuters dataset, dividing the data randomly based on the following splits: 90%-10%, 70%-30%, 50%-50%, 20%-80%, and 10%-90%. For example, in 10%-90%, we train CNC on 10% of the data and we report the ACC and NMI of CNC on the 90% test set.
Figure 3 shows how the ACC and NMI of CNC on the test data change as the size of the training data is varied. For example, when the size of the training data is 90%, the ACC of CNC on the test data is 0.824.\nAs we expected, and as shown in Figure 3, the ACC and NMI of CNC increase as the size of the training data is increased. Interestingly, we observed that with only 10% training data the ACC of CNC is 0.68, which is only 14% lower than the ACC with 90% training data. Similarly, the NMI of CNC with 10% training data is only 18% lower than the NMI with 90% training data.\nFigure 2: Normalized cuts (right axis) and clustering accuracy (left axis) are anticorrelated. Lower normalized cuts result in better accuracy.\nData for Figure 3 - Training split (%) | ACC | NMI:\n90 | 0.824 | 0.586\n70 | 0.797 | 0.527\n50 | 0.757 | 0.497\n20 | 0.718 | 0.456\n10 | 0.684 | 0.409\nFigure 3: Reuters: with only 10% training data, the ACC and NMI of CNC are only 14% and 18% lower than the ACC and NMI with 90% training data.\nFigure 4: Reuters: CNC is trained by fixing some parameters and varying others. With lr = 5e−4, b = 128, k = 3: ACC is 0.821 ± 4e−3." }, { "heading": "5.7 MODEL ARCHITECTURE AND HYPER-PARAMETERS", "text": "Here are the details of the CNC model for each dataset.\n• MNIST: The Siamese network has 4 layers sized [1024, 1024, 512, 10] with ReLU (embedding size d is 10). The clustering module has 2 layers sized [512, 512] with a final Gumbel-Softmax layer. Batch size is 256 and we only consider 3 nearest neighbors to find the embeddings and to construct the affinity graph for each batch. We use Adam with lr = 0.005 with decay 0.5. Temperature starts at 1.5 and the minimum is set to 0.5.\n• Reuters: The Siamese network has 3 layers sized [512, 256, 128] with ReLU (embedding size d is 128). The clustering module has 3 layers sized [512, 512, 512] with tanh activation and a final Gumbel-Softmax layer. Batch size is 128 and we only consider 3 nearest neighbors to find the embeddings and to construct the affinity graph for each batch. We use Adam with lr = 1e-4 with decay 0.5. Temperature starts at 1.5 and the minimum is set to 1.0.\n• CIFAR-10: The Siamese network has 2 layers sized [512, 256] with ReLU (embedding size d is 256). The clustering module has 2 layers sized [512, 512] with tanh activation and a final Gumbel-Softmax layer. Batch size is 256 and we only consider 2 nearest neighbors to find the embeddings and to construct the affinity graph for each batch. We use Adam with lr = 1e-4 with decay 0.1. Temperature starts at 2.5 and the minimum is set to 0.5.\n• CIFAR-100: The Siamese network has 2 layers sized [512, 256] with ReLU (embedding size d is 256). The clustering module has 3 layers sized [512, 512, 512] with tanh activation and a final Gumbel-Softmax layer. Batch size is 1024 and we only consider 3 nearest neighbors to find the embeddings and to construct the affinity graph for each batch. We use Adam with lr = 1e-3 with decay 0.5. Temperature starts at 1.5 and the minimum is set to 1.0.\nHyper-parameter Sensitivity: We train CNC on the Reuters dataset by fixing some hyper-parameters and varying others.
We noticed that CNC benefits from tuning the number of hidden layers (hl), the learning rate (lr), the batch size (b), and the number of nearest neighbors (k), but is not particularly sensitive to any of the other hyper-parameters, including the decay rate, the patience parameter (cadence of epochs at which decay is applied), the Gumbel-Softmax temperature, or the minimum temperature (Figure 4). More precisely, we varied the decay rate over the range [0.1-1.0], the patience from [5-25] epochs, the Gumbel-Softmax temperature from [1.0-2.0], and the minimum temperature from [0.5-1.0]. When we fix hl=3, lr=5e-5, b=64, and k=3, the average accuracy is 0.803 ± 2e−3. With hl=3, lr=5e-4, b=512, and k=10, the average accuracy is 0.811 ± 2e−3. With hl=3, lr=5e-4, b=128, and k=3, the average accuracy is 0.821 ± 4e−3. With hl=2, lr=1e-4, b=256, and k=3, the average accuracy is 0.766 ± 9e−4. And finally, with hl=4, lr=1e-5, b=512, and k=3, the average accuracy is 0.766 ± 7e−3. As one can see, the accuracy varied from 0.766 to 0.821." }, { "heading": "6 CONCLUSION", "text": "We propose CNC (Clustering by learning to optimize Normalized Cuts), a framework for learning to cluster unlabeled examples. We define a differentiable loss function equivalent to the expected normalized cuts and use it to train a CNC model that directly outputs final cluster assignments. CNC achieves state-of-the-art results on popular unsupervised clustering benchmarks (MNIST, Reuters, CIFAR-10, and CIFAR-100) and outperforms the strongest baselines by up to 10.9%. CNC also enables generalization, yielding up to 21.9% improvement over SpectralNet (Shaham et al., 2018), the previous best-performing generalizable clustering approach." }, { "heading": "7 APPENDIX", "text": "Graph partitioning is a well-studied subject in computer science (Karypis & Kumar, 2000; Karypis et al., 1999; Karypis & Kumar, 1998; Miettinen et al., 2006; Sanders & Schulz, 2013; Andersen et al., 2006; Chung, 2007) with numerous applications in computer vision, VLSI design, biology, social networks, transportation networks and more.\nIn this section, we evaluate the performance of CNC in partitioning computational graphs. In such graphs, nodes represent operations (e.g., MatMul, Conv2d, Sum) and edges represent the computational flow. Partitioning of the computational graph can be used for efficient mapping of the computation across the underlying hardware (e.g., CPUs and GPUs). Minimizing the normalized cuts translates to less communication between devices and balanced computation on each device." }, { "heading": "7.1 DATASETS AND BASELINE METHODS", "text": "We conducted experiments on five widely used TensorFlow computation graphs: ResNet, Inception-v3, AlexNet, MNIST-conv, and VGG. Clustering of operations can be used for the mapping of the computation across the underlying hardware (e.g., CPUs and GPUs). We use the open-source partitioners hMETIS (Karypis & Kumar, 2000) and KaHIP (Sanders & Schulz, 2013), a family of graph partitioning programs based on (Sanders & Schulz, 2012a;b), to find high-quality ground-truth partitions of TensorFlow computation graphs. More specifically, KaHIP includes KaFFPa (Karlsruhe Fast Flow Partitioner) and KaFFPaE (KaFFPa Evolutionary). KaFFPaE is a parallel evolutionary algorithm that uses KaFFPa’s combine and mutation operations, as well as KaBaPE, which extends the evolutionary algorithm. We compare the generalization results of CNC against the best results among the partitioners.
The details of the datasets are as follows:\n• ResNet (He et al., 2016) is a deep convolutional network with residual connections. The TensorFlow implementation of ResNet_v1_50 with 50 layers contains 20,586 operations.\n• Inception-v3 (Szegedy et al., 2017) consists of multiple blocks, each composed of several convolutional and pooling layers. The TensorFlow graph of this model contains 27,114 operations.\n• AlexNet (Krizhevsky et al., 2012) consists of 5 convolutional layers, some of which are followed by maxpool layers, and 3 dense layers with a final softmax. The TensorFlow graph of this model has 798 operations.\n• MNIST-conv has 3 convolutional layers. Its TensorFlow graph contains 414 operations.\n• VGG (Simonyan & Zisserman, 2014) has 16 convolutional layers. The TensorFlow graph of VGG contains 1,325 operations." }, { "heading": "7.2 PERFORMANCE MEASURES", "text": "To evaluate the quality of CNC’s clusterings, we use two common performance metrics in graph partitioning: 1) Edge cut (Cut): the ratio of the cut to the total number of edges, and 2) Balancedness (Bal): one minus the MSE between the number of nodes in every cluster and the number of nodes in an ideally balanced cluster (n/g). Both Cut and Bal are between 0 and 1. A lower Cut is better, while a higher Bal is better. The CNC loss function in Equation 6 only considers optimizing the expected normalized cut; we add a regularizer to improve balancedness between clusters." }, { "heading": "7.3 EXPERIMENTAL RESULTS", "text": "To show that CNC generalizes effectively to unseen graphs, we train CNC on a single TensorFlow graph, VGG, and validate on MNIST-conv. During inference, we test the trained model on unseen TensorFlow graphs: AlexNet, ResNet, and Inception-v3. We consider the best-quality result among hMETIS, KaFFPa, and KaFFPaE as the ground truth. The ground truth for AlexNet is Bal = 99% and Cut = 4.6%; for Inception-v3, Bal = 99% and Cut = 3.7%; and for ResNet, Bal = 99% and Cut = 3.3%.\nTable 4 shows the result of our experiments and illustrates the importance of graph embeddings in generalization. The operation type (such as Add, Conv2d, and L2loss in TensorFlow) is used as the node feature, encoded as a one-hot vector. We leverage GCN (Kipf & Welling, 2017) and GraphSAGE (Hamilton et al., 2017) to capture similarities across graphs. In GraphSAGE-on, both the node embedding and clustering modules are trained jointly, while in GCN and GraphSAGE-off, only the clustering module is trained. Table 4 shows that GraphSAGE-on (last row) achieves the best performance and generalizes better than the other models. Note that this model is trained on a single graph, VGG, with only 1,325 nodes, and is tested on AlexNet, ResNet, and Inception-v3 with 798, 20,586, and 27,114 nodes, respectively. On the other hand, the ground truth is the result of running different partitioning algorithms on each graph individually. In this work, our goal is not to beat the existing graph partitioning algorithms, which involve many heuristics tailored to a given graph. Our generalization results are promising: rather than using heuristics, CNC is able to learn graph structure for generalizable graph partitioning.\nModel Architecture and Hyper-parameters: The details of the model with the best performance (GraphSAGE-on) are as follows: the input feature dimension is 1518. GraphSAGE has 5 layers sized 512 with shared pooling, and the graph clustering module has 3 layers sized 64 with a final softmax layer.
We use ReLU, Xavier initialization (Glorot & Bengio, 2010), and Adam with lr = 7.5e-5." } ]
2,019
OPTIMIZE EXPECTED NORMALIZED CUTS
SP:76a052062e3e4bb707b24a8809c220c8ac1df83a
[ "This paper considers the \"weight transport problem\" which is the problem of ensuring that the feedforward weights $W_{ij}$ is the same as the feedback weights $W_{ji}$ in the spiking NN model of computation. This paper proposes a novel learning method for the feedback weights which depends on accurately estimating the causal effect of any spiking neuron on the other neurons deeper in the network. Additionally, they show that this method also minimizes a natural cost function. They run many experiments on FashionMNIST and CIFAR-10 to validate this and also show that for deeper networks this approaches the accuracy levels of GD-based algorithms. ", "Strong paper in the direction of a more biologically plausible solution for the weight transport problem, where the forward and the backward weights need to be aligned. Earlier work for feedback alignment has included methods such as hard-coding sign symmetry. In this method, the authors show that a piece-wise linear model of the feedback as a function of the input given to a neuron can estimate the causal effect of a spike on downstream neurons. The authors propose a learning rule based on regression discontinuity design (RDD) and show that this leads to stronger alignment of weights (especially in earlier layers) compared to previous methods. The causal effect is measured directly from the discontinuity introduced while spiking - the difference between the outputs of the estimated piece-wise linear model at the point of discontinuity is used as the feedback." ]
In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called “weight transport problem” for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST, SVHN, CIFAR-10 and VOC. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.
[ { "affiliations": [], "name": "Jordan Guerguiev" }, { "affiliations": [], "name": "Konrad P. Kording" }, { "affiliations": [], "name": "Blake A. Richards" } ]
[ { "authors": [ "Mohamed Akrout", "Collin Wilson", "Peter C Humphreys", "Timothy Lillicrap", "Douglas Tweed" ], "title": "Using weight mirrors to improve feedback alignment", "venue": null, "year": 1904 }, { "authors": [ "Joshua D Angrist", "Jörn-Steffen Pischke" ], "title": "Mostly harmless econometrics: An empiricist’s companion", "venue": "Princeton university press,", "year": 2008 }, { "authors": [ "Sergey Bartunov", "Adam Santoro", "Blake Richards", "Luke Marris", "Geoffrey E Hinton", "Timothy Lillicrap" ], "title": "Assessing the scalability of biologically-motivated deep learning algorithms and architectures", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Francis Crick" ], "title": "The recent excitement about neural networks", "venue": null, "year": 1989 }, { "authors": [ "Stephen Grossberg" ], "title": "Competitive learning: From interactive activation to adaptive resonance", "venue": null, "year": 1987 }, { "authors": [ "John F Kolen", "Jordan B Pollack" ], "title": "Backpropagation without weight transport", "venue": "IEEE International Conference on Neural Networks (ICNN’94),", "year": 1994 }, { "authors": [ "Daniel Kunin", "Jonathan M Bloom", "Aleksandrina Goeva", "Cotton Seed" ], "title": "Loss landscapes of regularized linear autoencoders", "venue": "arXiv preprint arXiv:1901.08168,", "year": 2019 }, { "authors": [ "Benjamin James Lansdell", "Konrad Paul Kording" ], "title": "Spiking allows neurons to estimate their causal effect", "venue": "bioRxiv, pp", "year": 2019 }, { "authors": [ "Dong-Hyun Lee", "Saizheng Zhang", "Asja Fischer", "Yoshua Bengio" ], "title": "Difference target propagation", "venue": "In Joint European conference on machine learning and knowledge discovery in databases,", "year": 2015 }, { "authors": [ "Timothy P Lillicrap", "Daniel Cownden", "Douglas B Tweed", "Colin J Akerman" ], "title": "Random synaptic feedback weights support error backpropagation for deep learning", "venue": "Nature communications,", "year": 2016 }, { "authors": [ "Ioana E Marinescu", "Patrick N Lawlor", "Konrad P Kording" ], "title": "Quasi-experimental causality in neuroscience and behavioural research", "venue": "Nature human behaviour,", "year": 2018 }, { "authors": [ "Theodore H Moskovitz", "Ashok Litwin-Kumar", "LF Abbott" ], "title": "Feedback alignment in deep convolutional networks", "venue": "arXiv preprint arXiv:1812.06488,", "year": 2018 }, { "authors": [ "Dhruva Venkita Raman", "Adriana Perez Rotondo", "Timothy O’Leary" ], "title": "Fundamental bounds on learning performance in neural circuits", "venue": null, "year": 2019 }, { "authors": [ "Pieter R Roelfsema", "Arjen van Ooyen" ], "title": "Attention-gated reinforcement learning of internal representations for classification", "venue": "Neural computation,", "year": 2005 }, { "authors": [ "João Sacramento", "Rui Ponte Costa", "Yoshua Bengio", "Walter Senn" ], "title": "Dendritic cortical microcircuits approximate the backpropagation algorithm", "venue": null, "year": 2018 }, { "authors": [ "Benjamin Scellier", "Yoshua Bengio" ], "title": "Equilibrium propagation: Bridging the gap between energybased models and backpropagation", "venue": "ISSN 1662-5188", "year": 2017 }, { "authors": [ "Reza Shadmehr", "Maurice A Smith", "John W Krakauer" ], "title": "Error correction, sensory prediction, and adaptation in motor control", "venue": null, "year": 2010 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist 
reinforcement learning", "venue": "Machine learning,", "year": 1992 } ]
[ { "heading": "1 INTRODUCTION", "text": "Any learning system that makes small changes to its parameters will only improve if the changes are correlated to the gradient of the loss function. Given that people and animals can also show clear behavioral improvements on specific tasks (Shadmehr et al., 2010), however the brain determines its synaptic updates, on average, the changes in must also correlate with the gradients of some loss function related to the task (Raman et al., 2019). As such, the brain may have some way of calculating at least an estimator of gradients.\nTo-date, the bulk of models for how the brain may estimate gradients are framed in terms of setting up a system where there are both bottom-up, feedforward and top-down, feedback connections. The feedback connections are used for propagating activity that can be used to estimate a gradient (Williams, 1992; Lillicrap et al., 2016; Akrout et al., 2019; Roelfsema & Ooyen, 2005; Lee et al., 2015; Scellier & Bengio, 2017; Sacramento et al., 2018). In all such models, the gradient estimator is less biased the more the feedback connections mirror the feedforward weights. For example, in the REINFORCE algorithm (Williams, 1992), and related algorithms like AGREL (Roelfsema &\nOoyen, 2005), learning is optimal when the feedforward and feedback connections are perfectly symmetric, such that for any two neurons i and j the synaptic weight from i to j equals the weight from j to i, e.g. Wji = Wij (Figure 1). Some algorithms simply assume weight symmetry, such as Equilibrium Propagation (Scellier & Bengio, 2017). The requirement for synaptic weight symmetry is sometimes referred to as the “weight transport problem”, since it seems to mandate that the values of the feedforward synaptic weights are somehow transported into the feedback weights, which is not biologically realistic (Crick, 1989-01-12; Grossberg, 1987). Solving the weight transport problem is crucial to biologically realistic gradient estimation algorithms (Lillicrap et al., 2016), and is thus an important topic of study.\nSeveral solutions to the weight transport problem have been proposed for biological models, including hard-wired sign symmetry (Moskovitz et al., 2018), random fixed feedback weights (Lillicrap et al., 2016), and learning to make the feedback weights symmetric (Lee et al., 2015; Sacramento et al., 2018; Akrout et al., 2019; Kolen & Pollack, 1994). Learning to make the weights symmetric is promising because it is both more biologically feasible than hard-wired sign symmetry (Moskovitz et al., 2018) and it leads to less bias in the gradient estimator (and thereby, better training results) than using fixed random feedback weights (Bartunov et al., 2018; Akrout et al., 2019). However, of the current proposals for learning weight symmetry some do not actually work well in practice (Bartunov et al., 2018) and others still rely on some biologically unrealistic assumptions, including scalar value activation functions (as opposed to all-or-none spikes) and separate error feedback pathways with one-to-one matching between processing neurons for the forward pass and error propagation neurons for the backward pass Akrout et al. (2019); Sacramento et al. (2018).\nInterestingly, learning weight symmetry is implicitly a causal inference problem—the feedback weights need to represent the causal influence of the upstream neuron on its downstream partners. 
As such, we may look to the causal inference literature to develop better, more biologically realistic algorithms for learning weight symmetry. In econometrics, which focuses on quasi-experiments, researchers have developed various means of estimating causality without the need to actually randomize and control the variables in question (Angrist & Pischke, 2008; Marinescu et al., 2018). Among such quasi-experimental methods, regression discontinuity design (RDD) is particularly promising. It uses the discontinuity introduced by a threshold to estimate causal effects. For example, RDD can be used to estimate the causal impact of getting into a particular school (which is a discontinuous, all-or-none variable) on later earning power. RDD is also potentially promising for estimating causal impact in biological neural networks, because real neurons communicate with discontinuous, all-or-none spikes. Indeed, it has been shown that the RDD approach can produce unbiased estimators of causal effects in a system of spiking neurons (Lansdell & Kording, 2019). Given that learning weight symmetry is fundamentally a causal estimation problem, we hypothesized that RDD could be used to solve the weight transport problem in biologically realistic, spiking neural networks.\nHere, we present a learning rule for feedback synaptic weights that is a special case of the RDD algorithm previously developed for spiking neural networks (Lansdell & Kording, 2019). Our algorithm takes advantage of a neuron’s spiking discontinuity to infer the causal effect of its spiking on the activity of downstream neurons. Since this causal effect is proportional to the feedforward synaptic weight between the two neurons, by estimating it, feedback synapses can align their weights to be symmetric with the reciprocal feedforward weights, thereby overcoming the weight transport problem. We demonstrate that this leads to the reduction of a cost function which measures the weight symmetry (or the lack thereof), that it can lead to better weight symmetry in spiking neural networks than other algorithms for weight alignment (Akrout et al., 2019), and that it leads to better learning in deep neural networks in comparison to the use of fixed feedback weights (Lillicrap et al., 2016). Altogether, these results demonstrate a novel algorithm for solving the weight transport problem that takes advantage of discontinuous spiking, and which could be used in future models of biologically plausible gradient estimation." }, { "heading": "2 RELATED WORK", "text": "Previous work has shown that even when feedback weights in a neural network are initialized randomly and remain fixed throughout training, the feedforward weights learn to partially align themselves to the feedback weights, an algorithm known as feedback alignment (Lillicrap et al., 2016).\nWhile feedback alignment is successful at matching the learning performance of true gradient descent in relatively shallow networks, it does not scale well to deeper networks and performs poorly on difficult computer vision tasks (Bartunov et al., 2018).\nThe gap in learning performance between feedback alignment and gradient descent can be overcome if feedback weights are continually updated to match the sign of the reciprocal feedforward weights (Moskovitz et al., 2018).
Furthermore, learning the feedback weights in order to make them more symmetric to the feedforward weights has been shown to improve learning over feedback alignment (Akrout et al., 2019).\nTo understand the underlying dynamics of learning weight symmetry, Kunin et al. (2019) define the symmetric alignment cost function, $\mathcal{R}_{SA}$, as one possible cost function that, when minimized, leads to weight symmetry:\n$$\mathcal{R}_{SA} := \|W - Y^{T}\|_{F}^{2} = \|W\|_{F}^{2} + \|Y\|_{F}^{2} - 2\,\mathrm{tr}(WY) \quad (1)$$\nwhere $W$ are feedforward weights and $Y$ are feedback weights. The first two terms are simply weight regularization terms that can be minimized using techniques like weight decay. But the third term is the critical one for ensuring weight alignment.\nIn this paper we present a biologically plausible method of minimizing the third term. This method is based on the work of Lansdell & Kording (2019), who demonstrated that neurons can estimate their causal effect on a global reward signal using the discontinuity introduced by spiking. This is accomplished using RDD, wherein a piecewise linear model is fit around a discontinuity, and the difference in the regression intercepts indicates the causal impact of the discontinuous variable. In Lansdell & Kording (2019), neurons learn a piece-wise linear model of a reward signal as a function of their input drive, and estimate the causal effect of spiking by looking at the discontinuity at the spike threshold. Here, we modify this technique to perform causal inference on the effect of spiking on downstream neurons, rather than a reward signal. We leverage this to develop a learning rule for feedback weights that induces weight symmetry and improves training." }, { "heading": "3 OUR CONTRIBUTIONS", "text": "The primary contributions of this paper are as follows:\n• We demonstrate that spiking neurons can accurately estimate the causal effect of their spiking on downstream neurons by using a piece-wise linear model of the feedback as a function of the input drive to the neuron.\n• We present a learning rule for feedback weights that uses the causal effect estimator to encourage weight symmetry. We show that when feedback weights update using this algorithm, it minimizes the symmetric alignment cost function, $\mathcal{R}_{SA}$.\n• We demonstrate that this weight symmetry learning rule improves training and test accuracy over feedback alignment, approaching gradient-descent-level performance on Fashion-MNIST, SVHN, CIFAR-10 and VOC in deeper networks." }, { "heading": "4 METHODS", "text": "" }, { "heading": "4.1 GENERAL APPROACH", "text": "In this work, we utilize a spiking neural network model for aligning feedforward and feedback weights. However, due to the intense computational demands of spiking neural networks, we only use spikes for the RDD algorithm. We then use the feedback weights learned by the RDD algorithm for training a non-spiking convolutional neural network. We do this because the goal of our work here is to develop an algorithm for aligning feedback weights in spiking networks, not for training feedforward weights in spiking networks on other tasks. Hence, in the interest of computational expediency, we only used spiking neurons when learning to align the weights. Additional details on this procedure are given below." }, { "heading": "4.2 RDD FEEDBACK TRAINING PHASE", "text": "At the start of every training epoch of a convolutional neural network, we use an RDD feedback weight training phase, during which all fully-connected sets of feedback weights in the network are updated.
To perform these updates, we simulate a separate network of leaky integrate-and-fire (LIF) neurons. LIF neurons incorporate key elements of real neurons such as voltages, spiking thresholds and refractory periods. Each epoch, we begin by training the feedback weights in the LIF network. These weights are then transferred to the convolutional network, which is used for training the feedforward weights. The new feedforward weights are then transferred to the LIF net, and another feedback training phase with the LIF net starts the next epoch (Figure 2A). During the feedback training phase, the LIF network undergoes a training phase lasting 90 s of simulated time (30 s per set of feedback weights) (Figure 2B). We find that the spiking network used for RDD feedback training and the convolutional neural network are very closely matched in the activity of the units (Figure S1), which gives us confidence that this approach of using a separate non-spiking network for training the feedforward weights is legitimate.\nDuring the feedback training phase, a small subset of neurons in the first layer receive driving input that causes them to spike, while other neurons in this layer receive no input (see Appendix A.2). The subset of neurons that receive driving input is randomly selected every 100 ms of simulated time. This continues for 30 s in simulated time, after which the same process occurs for the subsequent hidden layers in the network. This protocol enforces sparse, de-correlated firing patterns that improve the causal inference procedure of RDD." }, { "heading": "4.3 LIF DYNAMICS", "text": "During the RDD feedback training phase, each unit in the network is simulated as a leaky integrate-and-fire neuron. Spiking inputs from the previous layer arrive at feedforward synapses, where they are convolved with a temporal exponential kernel to simulate post-synaptic spike responses $p = [p_1, p_2, ..., p_m]$ (see Appendix A.1). The neurons can also receive driving input $\tilde{p}_i$ instead of synaptic inputs. The total feedforward input to neuron $i$ is thus defined as:\n$$I_i := \begin{cases} \omega \tilde{p}_i & \text{if } \tilde{p}_i > 0 \\ \sum_{j=1}^{m} W_{ij} p_j & \text{otherwise} \end{cases} \quad (2)$$\nwhere $W_{ij}$ is the feedforward weight to neuron $i$ from neuron $j$ in the previous layer, and $\omega$ is a hyperparameter. The voltage of the neuron, $v_i$, evolves as:\n$$\frac{dv_i}{dt} = -g_L v_i + g_D (I_i - v_i) \quad (3)$$\nwhere $g_L$ and $g_D$ are leak and dendritic conductance constants, respectively. The input drive to the neuron, $u_i$, is similarly modeled:\n$$\frac{du_i}{dt} = -g_L u_i + g_D (I_i - u_i) \quad (4)$$\nIf the voltage $v_i$ passes a spiking threshold $\theta$, the neuron spikes and the voltage is reset to a value $v_{\text{reset}} = -1$ (Figure 2C). Note that the input drive does not reset. This helps us to perform regressions both above and below the spike threshold. A minimal simulation sketch of these dynamics is given below.\nIn addition to feedforward inputs, spiking inputs from the downstream layer arrive at feedback synapses, where they create post-synaptic spike responses $q = [q_1, q_2, ..., q_n]$. These responses are used in the causal effect estimation (see below)." }, { "heading": "4.4 RDD ALGORITHM", "text": "Whenever the voltage approaches the threshold $\theta$ (i.e., $|v_i - \theta| < \alpha$, where $\alpha$ is a constant), an RDD window is initiated, lasting $T = 30$ ms in simulated time (Figure 2C).
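(As referenced in Section 4.3 above, the following is a minimal simulation sketch of the LIF dynamics in eqs. (3)-(4), assuming simple forward-Euler integration; the conductance constants, threshold, and step size are illustrative placeholders rather than the paper's exact values.)

```python
import numpy as np

def simulate_lif(I, g_L=0.1, g_D=0.6, theta=1.0, v_reset=-1.0, dt=1.0):
    """Forward-Euler integration of eqs. (3)-(4) for one neuron.
    The voltage v resets to v_reset on a spike; the input drive u
    follows the same dynamics but never resets (used for RDD)."""
    v, u = 0.0, 0.0
    spikes, drive = [], []
    for I_t in I:                                 # I_t: total input (eq. 2)
        v += dt * (-g_L * v + g_D * (I_t - v))    # eq. (3)
        u += dt * (-g_L * u + g_D * (I_t - u))    # eq. (4)
        if v >= theta:                            # spike: reset voltage only
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
        drive.append(u)       # u is later summarized as u_i^max per RDD window
    return np.array(spikes), np.array(drive)

# e.g., a constant suprathreshold input for 100 steps:
spikes, drive = simulate_lif(np.full(100, 2.0))
```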
At the end of this time window, at each feedback synapse, the maximum input drive during the RDD window, $u_i^{\max}$, and the average change in feedback from downstream neuron $k$ during the RDD window, $\Delta q_k^{\text{avg}}$, are recorded. $\Delta q_k^{\text{avg}}$ is defined as the difference between the average feedback received during the RDD window, $q_k^{\text{avg}}$, and the feedback at the start of the RDD window, $q_k^{\text{pre}}$:\n$$\Delta q_k^{\text{avg}} := q_k^{\text{avg}} - q_k^{\text{pre}} \quad (5)$$\nImportantly, $u_i^{\max}$ provides a measure of how strongly neuron $i$ was driven by its inputs (and whether or not it passed the spiking threshold $\theta$), while $\Delta q_k^{\text{avg}}$ is a measure of how the input received as feedback from neuron $k$ changed after neuron $i$ was driven close to its spiking threshold. These two values are then used to fit a piece-wise linear model of $\Delta q_k^{\text{avg}}$ as a function of $u_i^{\max}$ (Figure 2D). This piece-wise linear model is defined as:\n$$f_{ik}(x) := \begin{cases} c_{ik}^{1} x + c_{ik}^{2} & \text{if } x < \theta \\ c_{ik}^{3} x + c_{ik}^{4} & \text{if } x \ge \theta \end{cases} \quad (6)$$\nThe parameters $c_{ik}^{1}$, $c_{ik}^{2}$, $c_{ik}^{3}$ and $c_{ik}^{4}$ are updated to perform linear regression using gradient descent:\n$$\mathcal{L} = \frac{1}{2} \left\| f_{ik}(u_i^{\max}) - \Delta q_k^{\text{avg}} \right\|^{2} \quad (7)$$\n$$\Delta c_{ik}^{l} \propto -\frac{\partial \mathcal{L}}{\partial c_{ik}^{l}} \quad \text{for } l \in \{1, 2, 3, 4\} \quad (8)$$\nAn estimate of the causal effect of neuron $i$ spiking on the activity of neuron $k$, $\beta_{ik}$, is then defined as the difference in the two sides of the piece-wise linear function at the spiking threshold:\n$$\beta_{ik} := \lim_{x \to \theta^{+}} f_{ik}(x) - \lim_{x \to \theta^{-}} f_{ik}(x) \quad (9)$$\nFinally, the weight at the feedback synapse, $Y_{ik}$, is updated to be a scaled version of $\beta_{ik}$:\n$$Y_{ik} = \beta_{ik} \frac{\gamma}{\sigma_{\beta}^{2}} \quad (10)$$\nwhere $\gamma$ is a hyperparameter and $\sigma_{\beta}^{2}$ is the standard deviation of $\beta$ values for all feedback synapses in the layer. This ensures that the scale of the full set of feedback weights between two layers in the network remains stable during training. (A runnable sketch of this estimator is provided after the appendix.)" }, { "heading": "5 RESULTS", "text": "" }, { "heading": "5.1 ALIGNMENT OF FEEDBACK AND FEEDFORWARD WEIGHTS", "text": "To measure how well the causal effect estimate at each feedback synapse, $\beta_{ik}$, and thus the feedback weight $Y_{ik}$, reflects the reciprocal feedforward weight $W_{ki}$, we can measure the percentage of feedback weights that have the same sign as the reciprocal feedforward weights (Figure 3A). When training on CIFAR-10 with no RDD feedback training phase (i.e., feedback weights remain fixed throughout training), the feedback alignment effect somewhat increases the sign alignment during training, but it is ineffective at aligning the signs of weights in earlier layers in the network. Compared to feedback alignment, the addition of an RDD feedback training phase greatly increases the sign alignment between feedback and feedforward weights for all layers in the network, especially at earlier layers. In addition, the RDD algorithm increases sign alignment throughout the hierarchy more than the current state-of-the-art algorithm for weight alignment introduced recently by Akrout et al. (2019) (Figure 3A). Furthermore, RDD feedback training changes feedback weights to not only match the sign but also the magnitude of the reciprocal feedforward weights (Figure 3B), which makes it better for weight alignment than hard-wired sign symmetry (Moskovitz et al., 2018)." }, { "heading": "5.2 DESCENDING THE SYMMETRIC ALIGNMENT COST FUNCTION", "text": "The symmetric alignment cost function (Kunin et al., 2019) (Equation 1) can be broken down as:\n$$\mathcal{R}_{SA} = \mathcal{R}_{\text{decay}} + \mathcal{R}_{\text{self}} \quad (11)$$\nwhere we define $\mathcal{R}_{\text{decay}}$ and $\mathcal{R}_{\text{self}}$ as:\n$$\mathcal{R}_{\text{decay}} := \|W\|_{F}^{2} + \|Y\|_{F}^{2} \quad (12)$$\n$$\mathcal{R}_{\text{self}} := -2\,\mathrm{tr}(WY) \quad (13)$$\n$\mathcal{R}_{\text{decay}}$ is simply a weight regularization term that can be minimized using techniques like weight decay (a small numerical sketch of this decomposition follows).
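(As mentioned above, a small NumPy sanity check of the decomposition in eqs. (11)-(13); the matrix shapes are illustrative, not taken from the paper's architectures.)

```python
import numpy as np

def symmetric_alignment_terms(W, Y):
    """Decompose R_SA = ||W - Y^T||_F^2 into R_decay + R_self (eqs. 11-13)."""
    r_decay = np.sum(W ** 2) + np.sum(Y ** 2)   # eq. (12)
    r_self = -2.0 * np.trace(W @ Y)             # eq. (13)
    # the direct computation of eq. (1) must match the sum of the two terms:
    assert np.isclose(np.sum((W - Y.T) ** 2), r_decay + r_self)
    return r_decay, r_self

W = np.random.randn(64, 32)   # feedforward weights (illustrative shape)
Y = np.random.randn(32, 64)   # feedback weights of the reciprocal connection
r_decay, r_self = symmetric_alignment_terms(W, Y)
```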
$\mathcal{R}_{\text{self}}$, in contrast, measures how well aligned in direction the two matrices are. Our learning rule for feedback weights minimizes the $\mathcal{R}_{\text{self}}$ term for weights throughout the network (Figure 4). By comparison, feedback alignment decreases $\mathcal{R}_{\text{self}}$ to a smaller extent, and its ability to do so diminishes at earlier layers in the network. This helps to explain why our algorithm induces weight alignment, and can improve training performance (see below)." }, { "heading": "5.3 PERFORMANCE ON FASHION-MNIST, SVHN, CIFAR-10 AND VOC", "text": "We trained the same network architecture (see Appendix A.3) on the Fashion-MNIST, SVHN, CIFAR-10 and VOC datasets using standard autograd techniques (backprop), feedback alignment, and our RDD feedback training phase. RDD feedback training substantially improved the network’s performance over feedback alignment, and led to backprop-level accuracy on the train and test sets (Figure 5)." }, { "heading": "6 DISCUSSION", "text": "In order to understand how the brain learns complex tasks that require coordinated plasticity across many layers of synaptic connections, it is important to consider the weight transport problem. Here, we presented an algorithm for updating feedback weights in a network of spiking neurons that takes advantage of the spiking discontinuity to estimate the causal effect between two neurons (Figure 2). We showed that this algorithm enforces weight alignment (Figure 3), and identified a loss function, $\mathcal{R}_{\text{self}}$, that is minimized by our algorithm (Figure 4). Finally, we demonstrated that our algorithm allows deep neural networks to achieve better learning performance than feedback alignment on Fashion-MNIST and CIFAR-10 (Figure 5). These results demonstrate the potential power of RDD as a means for solving the weight transport problem in biologically plausible deep learning models.\nOne aspect of our algorithm that is still biologically implausible is that it does not adhere to Dale’s principle, which states that a neuron performs the same action on all of its target cells (Strata & Harvey). This means that a neuron’s outgoing connections cannot include both positive and negative weights. However, even under this constraint, a neuron can have an excitatory effect on one downstream target and an inhibitory effect on another, by activating intermediary inhibitory interneurons. Because our algorithm provides a causal estimate of one neuron’s impact on another, theoretically, it could capture such polysynaptic effects. Therefore, this algorithm is in theory compatible with Dale’s principle. Future work should test the effects of this algorithm when implemented in a network of neurons that are explicitly excitatory or inhibitory." }, { "heading": "A APPENDIX", "text": "A.1 LIF NEURON SIMULATION DETAILS\nPost-synaptic spike responses at feedforward synapses, $p$, were calculated from pre-synaptic binary spikes using an exponential kernel function $\kappa$:\n$$p_j(t) = \sum_k \kappa(t - \tilde{t}_{jk}) \quad (14)$$\nwhere $\tilde{t}_{jk}$ is the $k^{th}$ spike time of input neuron $j$ and $\kappa$ is given by:\n$$\kappa(t) = \left(e^{-t/\tau_L} - e^{-t/\tau_s}\right)\Theta(t) / (\tau_L - \tau_s) \quad (15)$$\nwhere $\tau_s = 0.003$ s and $\tau_L = 0.01$ s represent short and long time constants, and $\Theta$ is the Heaviside step function (a short sketch of this kernel follows).
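(A brief sketch of the kernel in eq. (15) and the spike-response sum of eq. (14); the spike times below are illustrative.)

```python
import numpy as np

def kappa(t, tau_s=0.003, tau_L=0.01):
    """Double-exponential synaptic kernel of eq. (15); Theta(t) is (t >= 0)."""
    return (np.exp(-t / tau_L) - np.exp(-t / tau_s)) * (t >= 0) / (tau_L - tau_s)

def spike_response(t, spike_times):
    """Post-synaptic spike response p_j(t) of eq. (14): a sum of kernels."""
    return sum(kappa(t - tk) for tk in spike_times)

t = np.linspace(0.0, 0.1, 1000)              # 100 ms of simulated time
p = spike_response(t, [0.01, 0.03, 0.032])   # three illustrative spike times
```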
Post-synaptic spike responses at feedback synapses, $q$, were computed in the same way.\nA.2 RDD FEEDBACK TRAINING IMPLEMENTATION\nA.2.1 WEIGHT SCALING\nWeights were shared between the convolutional network and the network of LIF neurons, but feedforward weights in the LIF network were scaled versions of the convolutional network weights:\n$$W^{\text{LIF}}_{ij} = \psi\, m\, W^{\text{Conv}}_{ij} / \sigma^{2}_{W^{\text{Conv}}} \quad (16)$$\nwhere $W^{\text{Conv}}$ is a feedforward weight matrix in the convolutional network, $W^{\text{LIF}}$ is the corresponding weight matrix in the LIF network, $m$ is the number of units in the upstream layer (i.e., the number of columns in $W^{\text{Conv}}$), $\sigma^{2}_{W^{\text{Conv}}}$ is the standard deviation of $W^{\text{Conv}}$, and $\psi$ is a hyperparameter. This rescaling ensures that spike rates in the LIF network stay within an optimal range for the RDD algorithm to converge quickly, even if the scale of the feedforward weights in the convolutional network changes during training. This avoids situations where the scale of feedforward weights is so small that little or no spiking occurs in the LIF neurons.\nA.2.2 FEEDBACK TRAINING PARADIGM\nThe RDD feedback training paradigm is implemented as follows. We start by providing driving input to the first layer in the network of LIF neurons. To create this driving input, we choose a subset of 20% of the neurons in that layer, and create a unique input spike train for each of these neurons using a Poisson process with a rate of 200 Hz. All other neurons in the layer receive no driving input. Every 100 ms, a new set of neurons to receive driving input is randomly chosen. After 30 s, this layer stops receiving driving input, and the process repeats for the next layer in the network.\nA.3 NETWORK AND TRAINING DETAILS\nThe network architectures used to train on Fashion-MNIST and CIFAR-10 are described in Table 1.\nInputs were randomly cropped and flipped during training, and batch normalization was used at each layer. Networks were trained using a minibatch size of 32.\nA.4 AKROUT ET AL. (2019) ALGORITHM IMPLEMENTATION\nIn experiments that compared sign alignment using our RDD algorithm with the Akrout et al. (2019) algorithm, we kept the same RDD feedback training paradigm (i.e., layers were sequentially driven, and a small subset of neurons in each layer was active at once). However, rather than updating feedback weights using RDD, we recorded the mean firing rates of the active neurons in the upstream layer, $r_l$, and the mean firing rates in the downstream layer, $r_{l+1}$. We then used the following feedback weight update rule:\n$$\Delta Y = \eta\, r_l r_{l+1}^{T} - \lambda_{WD} Y \quad (17)$$\nwhere $Y$ are the feedback weights between layers $l+1$ and $l$, and $\eta$ and $\lambda_{WD}$ are learning rate and weight decay hyperparameters, respectively.\nFigure S1: Comparison of average spike rates in the fully-connected layers of the LIF network vs. activities of the same layers in the convolutional network, when both sets of layers were fed the same input. Spike rates in the LIF network are largely correlated with activities of units in the convolutional network." } ]
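As referenced in Section 4.4, the following is a minimal sketch of the per-synapse RDD estimator of eqs. (6)-(10), written in plain NumPy; the learning rate and the per-window bookkeeping are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

class RDDEstimator:
    """Piecewise-linear regression of dq_avg on u_max (eq. 6), fit by
    gradient descent on the squared error (eqs. 7-8). The causal-effect
    estimate beta_ik is the jump of the fit at the threshold (eq. 9)."""

    def __init__(self, theta, lr=0.01):
        self.theta = theta
        self.lr = lr
        self.c = np.zeros(4)   # c1, c2: below-threshold line; c3, c4: above

    def update(self, u_max, dq_avg):
        """One completed RDD window: regress on the side u_max falls on."""
        a, b = (0, 1) if u_max < self.theta else (2, 3)
        err = (self.c[a] * u_max + self.c[b]) - dq_avg
        self.c[a] -= self.lr * err * u_max     # gradient step, eq. (8)
        self.c[b] -= self.lr * err

    def beta(self):
        """Eq. (9): right limit minus left limit of f_ik at the threshold."""
        return (self.c[2] * self.theta + self.c[3]) - \
               (self.c[0] * self.theta + self.c[1])

def feedback_weights(betas, gamma):
    """Eq. (10); the paper writes the normalizer as sigma_beta^2 while
    describing it as the standard deviation -- variance is assumed here."""
    return betas * gamma / np.var(betas)
```

Each feedback synapse keeps one such estimator; over many RDD windows, beta_ik converges toward the causal effect of neuron i's spiking on neuron k, which eq. (10) then scales into the feedback weight Y_ik.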
2020
SPIKE-BASED CAUSAL INFERENCE FOR WEIGHT ALIGNMENT
SP:941824acd2bae699174e6bed954e2938eb4bede1
[ "This paper presents a voice conversion approach using GANs based on adaptive instance normalization (AdaIN). The authors give the mathematical formulation of the problem and provide the implementation of the so-called AdaGAN. Experiments are carried out on VCTK and the proposed AdaGAN is compared with StarGAN. The idea is ok and the concept of using AdaIN for efficient voice conversion is also good. But the paper has a lot of issues both technically and grammatically, which makes the paper hard to follow.", "This work describes an efficient voice conversion system that can operate on non-parallel samples and convert from and to multiple voices. The central element of the methodology is the AdaIn modification. This is an efficient speaker adaptive technique where features are re-normalized to a particular speaker's domain. The rest of the machinery is well motivated and well executed, but less novel. This addition enables the voice conversion between speakers." ]
Voice Conversion (VC) is a task of converting perceived speaker identity from a source speaker to a particular target speaker. The earlier approaches in the literature primarily find a mapping between the given source-target speaker-pairs. Developing mapping techniques for many-to-many VC using non-parallel data, including zero-shot learning, remains a less explored area in VC. Most of the many-to-many VC architectures require training data from all the target speakers for whom we want to convert the voices. In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in training (i.e., the case of zero-shot learning). In particular, we propose Adaptive Generative Adversarial Network (AdaGAN), a new architectural training procedure that helps in learning a normalized speaker-independent latent representation, which will be used to generate speech with different speaking styles in the context of VC. We compare our results with the state-of-the-art StarGAN-VC architecture. In particular, AdaGAN achieves 31.73% and 10.37% relative improvements compared to StarGAN in MOS tests for speech quality and speaker similarity, respectively. The key strength of the proposed architecture is that it yields these results with less computational complexity. AdaGAN is 88.6% less complex than StarGAN-VC in terms of FLoating point Operations Per Second (FLOPS), and 85.46% less complex in terms of trainable parameters.
[]
[ { "authors": [ "Sercan Arik", "Jitong Chen", "Kainan Peng", "Wei Ping", "Yanqi Zhou" ], "title": "Neural voice cloning with a few samples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chad Atalla", "Bartholomew Tam", "Amanda Song", "Gary Cottrell" ], "title": "Look ma, no gans! image transformation with modifae", "venue": null, "year": 2018 }, { "authors": [ "Fadi Biadsy", "Ron J Weiss", "Pedro J Moreno", "Dimitri Kanvesky", "Ye Jia" ], "title": "Parrotron: An end-to-end speech-to-speech conversion model and its applications to hearing-impaired speech and speech separation", "venue": null, "year": 1904 }, { "authors": [ "Merlijn Blaauw", "Jordi Bonada" ], "title": "Modeling and transforming speech using variational autoencoders", "venue": "In INTERSPEECH, pp. 1770–1774,", "year": 2016 }, { "authors": [ "Ling-Hui Chen", "Zhen-Hua Ling", "Li-Juan Liu", "Li-Rong Dai" ], "title": "Voice conversion using deep neural networks with layer-wise generative training", "venue": "IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP),", "year": 2014 }, { "authors": [ "Ju-chieh Chou", "Cheng-chieh Yeh", "Hung-yi Lee", "Lin-shan Lee" ], "title": "Multi-target voice conversion without parallel data by adversarially learning disentangled audio representations", "venue": "arXiv preprint arXiv:1804.02812,", "year": 2018 }, { "authors": [ "Ju-chieh Chou", "Cheng-chieh Yeh", "Hung-yi Lee" ], "title": "One-shot voice conversion by separating speaker and content representations with instance normalization", "venue": null, "year": 1904 }, { "authors": [ "Anders Eriksson", "Pär Wretling" ], "title": "How flexible is the human voice?-a case study of mimicry", "venue": "In Fifth European Conference on Speech Communication and Technology,", "year": 1997 }, { "authors": [ "D. Erro", "I. Sainz", "E. Navas", "I. 
Hernáez" ], "title": "Improved HNM-based vocoder for statistical synthesizers", "venue": "In INTERSPEECH,", "year": 2011 }, { "authors": [ "Daniel Erro", "Asunción Moreno", "Antonio Bonafonte" ], "title": "INCA algorithm for training voice conversion systems from nonparallel corpora", "venue": "IEEE Transactions on Audio, Speech, and Language Processing,", "year": 2010 }, { "authors": [ "D Gomathi", "Sathya Adithya Thati", "Karthik Venkat Sridaran", "Bayya Yegnanarayana" ], "title": "Analysis of mimicry speech", "venue": "In Thirteenth Annual Conference of the International Speech Communication Association,", "year": 2012 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2014 }, { "authors": [ "Chin-Cheng Hsu", "Hsin-Te Hwang", "Yi-Chiao Wu", "Yu Tsao", "Hsin-Min Wang" ], "title": "Voice conversion from non-parallel corpora using variational auto-encoder", "venue": "In 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA),", "year": 2016 }, { "authors": [ "Xun Huang", "Serge Belongie" ], "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Hirokazu Kameoka", "Takuhiro Kaneko", "Kou Tanaka", "Nobukatsu Hojo" ], "title": "Stargan-vc: Nonparallel many-to-many voice conversion with star generative adversarial networks", "venue": "arXiv preprint arXiv:1806.02169,", "year": 2018 }, { "authors": [ "Takuhiro Kaneko", "Hirokazu Kameoka" ], "title": "Parallel-data-free voice conversion using cycle-consistent adversarial networks", "venue": "arXiv preprint arXiv:1711.11293,", "year": 2017 }, { "authors": [ "Taeksoo Kim", "Moonsu Cha", "Hyunsoo Kim", "Jung Kwon Lee", "Jiwon Kim" ], "title": "Learning to discover cross-domain relations with generative adversarial networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Tomi Kinnunen", "Lauri Juvela", "Paavo Alku", "Junichi Yamagishi" ], "title": "Non-parallel voice conversion using i-vector PLDA: Towards unifying speaker verification and transformation", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2017 }, { "authors": [ "Chung-Han Lee", "Chung-Hsien Wu" ], "title": "MAP-based adaptation for speech conversion using adaptation data selection and non-parallel training", "venue": "In 9 International Conference on Spoken Language Processing (ICSLP),", "year": 2006 }, { "authors": [ "Daniel Michelsanti", "Zheng-Hua Tan" ], "title": "Conditional generative adversarial networks for speech enhancement and noise-robust speaker verification", "venue": "In INTERSPEECH,", "year": 2017 }, { "authors": [ "Athanasios Mouchtaris", "Jan Van der Spiegel", "Paul Mueller" ], "title": "Nonparallel training for voice conversion based on a parameter adaptation approach", "venue": "IEEE Transactions on Audio, Speech, and Language Processing,", "year": 2006 }, { "authors": [ "Toru Nakashika", "Tetsuya Takiguchi", "Yasuo Ariki" ], "title": "Voice conversion based on speaker-dependent restricted boltzmann machines", "venue": "IEICE TRANSACTIONS on 
Information and Systems,", "year": 2014 }, { "authors": [ "Kaizhi Qian", "Yang Zhang", "Shiyu Chang", "Xuesong Yang", "Mark Hasegawa-Johnson" ], "title": "Autovc: Zero-shot voice style transfer with only autoencoder loss", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Yuki Saito", "Yusuke Ijima", "Kyosuke Nishida", "Shinnosuke Takamichi" ], "title": "Non-parallel voice conversion using variational autoencoders conditioned by phonetic posteriorgrams and d-vectors", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2018 }, { "authors": [ "Yuki Saito", "Shinnosuke Takamichi", "Hiroshi Saruwatari" ], "title": "Statistical parametric speech synthesis incorporating generative adversarial networks", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2018 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Nirmesh Shah", "Mihir Parmar", "Neil Shah", "Hemant A. Patil" ], "title": "Novel MMSE DiscoGAN for crossdomain whisper-to-speech conversion", "venue": "In Machine Learning in Speech and Language Processing (MLSLP) Workshop,", "year": 2018 }, { "authors": [ "Nirmesh J. Shah", "Hemant A. Patil" ], "title": "Effectiveness of dynamic features in inca and temporal context-inca", "venue": "In INTERSPEECH,", "year": 2018 }, { "authors": [ "Nirmesh J. Shah", "Maulik Madhavi", "Hemant A. Patil" ], "title": "Unsupervised vocal tract length warped posterior features for non-parallel voice conversion", "venue": "In INTERSPEECH,", "year": 2018 }, { "authors": [ "Nirmesh J. Shah", "Sreeraj R", "Neil Shah", "Hemant A. Patil" ], "title": "Novel inter mixture weighted GMM posteriorgram for DNN and GAN-based voice conversion", "venue": "In APSIPA,", "year": 2018 }, { "authors": [ "Y. Stylianou" ], "title": "Voice transformation: a survey", "venue": "In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2009 }, { "authors": [ "Yannis Stylianou", "Olivier Cappé", "Eric Moulines" ], "title": "Continuous probabilistic transform for voice conversion", "venue": "IEEE Transactions on Speech and Audio Processing,", "year": 1998 }, { "authors": [ "Lifa Sun", "Shiyin Kang", "Kun Li", "Helen Meng" ], "title": "Voice conversion using deep bidirectional long short-term memory based recurrent neural networks", "venue": "In 2015 IEEE ICASSP,", "year": 2015 }, { "authors": [ "Tomoki Toda", "Alan W. Black", "Keiichi Tokuda" ], "title": "Voice conversion based on maximum-likelihood estimation of spectral parameter trajectory", "venue": "IEEE Transactions on Audio, Speech, and Language Processing,", "year": 2007 }, { "authors": [ "Christophe Veaux", "Junichi Yamagishi", "Kirsten MacDonald" ], "title": "Cstr vctk corpus: English multispeaker corpus for cstr voice cloning toolkit", "venue": "University of Edinburgh. 
The Centre for Speech Technology Research (CSTR),", "year": 2017 }, { "authors": [ "Feng-Long Xie", "Frank K Soong", "Haifeng Li" ], "title": "A KL divergence and DNN-based approach to voice conversion without parallel training sentences", "venue": "In INSTERSPEECH,", "year": 2016 }, { "authors": [ "Hui Ye", "Steve Young" ], "title": "Quality-enhanced voice morphing using maximum likelihood transformations", "venue": "IEEE Transactions on Audio, Speech, and Language Processing,", "year": 2006 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Language is the core of civilization, and speech is the most powerful and natural form of communication. Human voice mimicry has always been considered as one of the most difficult tasks since it involves understanding of the sophisticated human speech production mechanism (Eriksson & Wretling (1997)) and challenging concepts of prosodic transfer (Gomathi et al. (2012)). In the literature, this is achieved using Voice Conversion (VC) technique (Stylianou (2009)). Recently, VC has gained more attention due to its fascinating real-world applications in privacy and identity protection, military operations, generating new voices for animated and fictional movies, voice repair in medical-domain, voice assistants, etc. Voice Conversion (VC) technique converts source speaker’s voice in such a way as if it were spoken by the target speaker. This is primarily achieved by modifying spectral and prosodic features while retaining the linguistic information in the given speech signal (Stylianou et al. (1998)). In addition, Voice cloning is one of the closely related task to VC (Arik et al. (2018)). However, in this research work we only focus to advance the Voice Conversion.\nWith the emergence of deep learning techniques, VC has become more efficient. Deep learningbased techniques have made remarkable progress in parallel VC. However, it is difficult to get parallel data, and such data needs alignment (which is a arduous process) to get better results. Building a VC system from non-parallel data is highly challenging, at the same time valuable for practical application scenarios. Recently, many deep learning-based style transfer algorithms have been applied for non-parallel VC task. Hence, this problem can be formulated as a style transfer problem, where one speaker’s style is converted into another while preserving the linguistic content as it is. In particular, Conditional Variational AutoEncoders (CVAEs), Generative Adversarial Networks (GANs) (proposed by Goodfellow et al. (2014)), and its variants have gained significant attention in non-parallel VC. However, it is known that the training task for GAN is hard, and the convergence property of GAN is fragile (Salimans et al. (2016)). There is no substantial evidence that the gen-\nerated speech is perceptually good. Moreover, CVAEs alone do not guarantee distribution matching and suffers from the issue of over smoothing of the converted features.\nAlthough, there are few GAN-based systems that produced state-of-the-art results for non-parallel VC. Among these algorithms, even fewer can be applied for many-to-many VC tasks. At last, there is the only system available for zero-shot VC proposed by Qian et al. (2019). Zero-shot conversion is a technique to convert source speaker’s voice into an unseen target speaker’s speaker via looking at a few utterances of that speaker. As known, solutions to a challenging problem comes with trade-offs. Despite the results, architectures have become more complex, which is not desirable in real-world scenarios because the quality of algorithms or architectures is also measured by the training time and computational complexity of learning trainable parameters (Goodfellow et al. (2016)).\nMotivated by this, we propose computationally less expensive Adaptive GAN (AdaGAN), a new style transfer framework, and a new architectural training procedure that we apply to the GAN-based framework. 
In AdaGAN, the generator encapsulates Adaptive Instance Normalization (AdaIN) for style transfer, and the discriminator is responsible for adversarial training. Recently, StarGAN-VC (proposed by Kameoka et al. (2018)) is a state-of-the-art method among all the GAN-based frameworks for non-parallel many-to-many VC. AdaGAN is also GAN-based framework. Therefore, we compare AdaGAN with StarGAN-VC for non-parallel many-to-many VC in terms of naturalness, speaker similarity, and computational complexity. We observe that AdaGAN yields state-of-the-art results for this with almost 88.6% less computational complexity. Recently proposed AutoVC (by Qian et al. (2019)) is the only framework for zero-shot VC. Inspired by this, we propose AdaGAN for zero-shot VC as an independent study, which is the first GAN-based framework to perform zeroshot VC. We reported initial results for zero-shot VC using AdaGAN.The main contributions of this work are as follows:\n• We introduce the concept of latent representation based many-to-many VC using GAN for the first time in literature.\n• We show that in the latent space content of the speech can be represented as the distribution and the properties of this distribution will represent the speaking style of the speaker.\n• Although AdaGAN has much lesser computation complexity, AdaGAN shows much better results in terms of naturalness and speaker similarity compared to the baseline." }, { "heading": "2 RELATED WORK", "text": "Developing a non-parallel VC framework is challenging task because of the problems associated with the training conditions using non-parallel data in deep learning architectures. However, attempts have been made to develop many non-parallel VC frameworks in the past decade. For example, Maximum Likelihood (ML)-based approach proposed by Ye & Young (2006), speaker adaptation technique by Mouchtaris et al. (2006), GMM-based VC method using Maximum a posteriori (MAP) adaptation technique by Lee & Wu (2006), iterative alignment method by Erro et al. (2010), Automatic Speech Recognition (ASR)-based method by Xie et al. (2016), speaker verification-based method using i-vectors by Kinnunen et al. (2017), and many other frameworks (Chen et al. (2014); Nakashika et al. (2014); Blaauw & Bonada (2016); Hsu et al. (2016); Kaneko & Kameoka (2017); Saito et al. (2018a); Sun et al. (2015); Shah et al. (2018b;c); Shah & Patil (2018); Biadsy et al. (2019)). Recently, a method using Conditional Variational Autoencoders (CVAEs) (Kingma & Welling (2013)) was proposed for non-parallel VC by (Hsu et al. (2016); Saito et al. (2018a)). Recently, VAE based method for VC was proposed, which also uses AdaIN to transfer the speaking style (Chou et al. (2019)). One powerful framework that can potentially overcome the weakness of VAEs involves GANs. While GAN-based methods were originally applied for image translation problems, these methods have also been employed with noteworthy success for various speech technology-related applications, we can see via architectures proposed by (Michelsanti & Tan (2017); Saito et al. (2018b); Shah et al. (2018a)), and many others. In GANs-based methods, Cycle-consistent Adversarial Network (CycleGAN)-VC is one of the state-of-the-art methods in the non-parallel VC task proposed by (Kaneko & Kameoka (2017)).\nAmong these non-parallel algorithms, a few can produce good results for non-parallel many-tomany VC. Recently, StarGAN-VC (Kameoka et al. 
2018) is a state-of-the-art method for non-parallel many-to-many VC among all the GAN-based frameworks. Past attempts have been made to achieve conversion using style transfer algorithms (Atalla et al. (2018); Chou et al. (2018); Qian et al. (2019)). The most recent framework is AutoVC (proposed by Qian et al. (2019)), which uses a style transfer scheme and is the first and only framework in the VC literature to achieve state-of-the-art results in zero-shot VC." }, { "heading": "3 APPROACH", "text": "" }, { "heading": "3.1 PROBLEM FORMULATION", "text": "The traditional VC problem is reformulated as a style transfer problem. Here, we assume $Z$ is a set of $n$ speakers denoted by $Z = \{Z_1, Z_2, ..., Z_n\}$, where $Z_i$ is the $i^{th}$ speaker, and $U$ is the set of $m$ speech utterances denoted by $U = \{U_1, U_2, ..., U_m\}$, where $U_i$ is the $i^{th}$ speech utterance. Now, a probability density function (pdf) is generated for given $Z_i$ and $U_i$, denoted by $p_X(\cdot|Z_i, U_i)$, via the stochastic process of random sampling from the distributions $Z_i$ and $U_i$. Here, $X_i \sim p_X(\cdot|Z_i, U_i)$ can be referred to as the features of the given $U_i$ with the speaking style of $Z_i$.\nThe key idea is to transfer the speaking style of one speaker to another in order to achieve VC. For this, let us consider a set of random variables $(Z_1, U_1)$ corresponding to a source speaker, and $(Z_2, U_2)$ corresponding to a target speaker. Here, $U_1$ and $U_2$ are spoken by $Z_1$ and $Z_2$, respectively. Our goal is to achieve $p_{\hat{X}}(\cdot|Z_2, U_1)$. Now, we want to learn a mapping function to achieve our goal for VC. Our mapping function should generate the distribution denoted by $\hat{X}_{Z_1 \to Z_2}$ with the speaking style of $Z_2$ while retaining the linguistic content of $U_1$. Formally, we want the generated pdf (i.e., $p_{\hat{X}_{Z_1 \to Z_2}}(\cdot|Z_1, U_1, Z_2, U_2)$) to be close or equal to $p_{\hat{X}}(\cdot|Z_2, U_1)$. Precisely, our mapping function should achieve this property, as shown in eq. (1):\n$$p_{\hat{X}_{Z_1 \to Z_2}}(\cdot|Z_1, U_1, Z_2, U_2) = p_{\hat{X}}(\cdot|Z_2, U_1). \quad (1)$$\nIntuitively, we want to transfer the speaking style of $Z_2$ to $Z_1$ while preserving the linguistic content of $U_1$. Therefore, the converted voice is perceptually sound, as if utterance $U_1$ were spoken by $Z_2$. With this, AdaGAN is also designed to achieve zero-shot VC. During zero-shot conversion, $U_1$ and $U_2$ can be seen or unseen utterances, and $Z_1$ and $Z_2$ can be seen or unseen speakers.\n3.2 ADAPTIVE INSTANCE NORMALIZATION (AdaIN)\nOur key idea for style transfer in VC revolves around AdaIN. AdaIN was first introduced for arbitrary style transfer in image-to-image translation tasks by Huang & Belongie (2017). In this paper, AdaIN helps us to capture the speaking style and linguistic content in a single feature representation. AdaIN takes features of a source speaker’s speech (i.e., $X$) and sample features of the target speaker’s speech (i.e., $Y$). Here, $x$ is a feature from the set $X$ related to the linguistic content of the source speech, and $Y$ represents features related to the speaking style of the target speaker. AdaIN maps the mean and standard deviation of $X$ (i.e., $\mu_X$ and $\sigma_X$) in such a way that they match the mean and standard deviation of $Y$ (i.e., $\mu_Y$ and $\sigma_Y$). The mathematical equation of AdaIN is defined as (Huang & Belongie (2017)):\n$$\text{AdaIN}(x, Y) = \sigma_Y \left( \frac{x - \mu_X}{\sigma_X} \right) + \mu_Y. \quad (2)$$\nFrom eq. (2), we can infer that AdaIN first normalizes $x$, and then scales it back based on the mean and standard deviation of $Y$ (a one-function sketch of this operation follows).
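(As noted above, a one-function sketch of eq. (2), assuming 1-D NumPy feature vectors; the small epsilon for numerical stability is an implementation detail not stated in the paper.)

```python
import numpy as np

def adain(x, y, eps=1e-8):
    """Eq. (2): normalize x with its own statistics, then rescale and
    shift it with the statistics of the style features y."""
    return y.std() * (x - x.mean()) / (x.std() + eps) + y.mean()
```

Since AdaIN itself has no learnable parameters, it adds nothing to the trainable-parameter count discussed later in Section 5.2.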
Intuitively, let’s assume that we have one latent space which represents the linguistic content in the distribution and also contains the speaking style in terms of the mean and standard deviation of the same distribution. To transfer the speaking style, we adopt the distribution properties (i.e., the mean and standard deviation) of the target speaker. As a result, the output produced by AdaIN has high average activation for the features which are responsible for style ($Y$), while preserving the linguistic content. AdaIN does not have any learnable parameters. Hence, it will not affect the computational complexity of the framework." }, { "heading": "4 PROPOSED ADAGAN FRAMEWORK", "text": "In this Section, we discuss our proposed AdaGAN architecture in detail. We show that AdaIN helps the generator make speaking style transfer easy and efficient, and can achieve zero-shot VC. We present an intuitive and theoretical analysis of the proposed framework.\nThe AdaGAN framework consists of an encoder $En(\cdot)$, a decoder $De(\cdot)$, and a discriminator $Dis(\cdot)$. Here, $En(\cdot)$ encodes the input features of speech to the latent space, $De(\cdot)$ generates the features of speech from the given latent space, and $Dis(\cdot)$ ensures adversarial training. The style transfer scheme and training procedure are shown in Fig. 1." }, { "heading": "4.1 PROPOSED STYLE TRANSFER SCHEME", "text": "Features of the source speaker’s speech (i.e., $x$) and any sample features of the target speaker’s speech (i.e., $y$) are taken as input to $En(\cdot)$ to get the required latent space representations $S_x$ and $S_y$, as given in eq. (3). Now, AdaIN is used to transfer the distribution properties (i.e., the mean and standard deviation) of $S_y$ to $S_x$, and generate the single feature representation denoted by $t$, as per eq. (3). In the next step, we use $De(\cdot)$ to generate features of speech (i.e., $x_{Z_1 \to Z_2}$) from $t$. This entire process is illustrated in Fig. 1(a). The generated features $x_{Z_1 \to Z_2}$ contain the speaking style of the target speaker while retaining the linguistic content of the source speaker’s speech. We have encapsulated this style transfer algorithm into the generator of AdaGAN in order to improve the quality of $x_{Z_1 \to Z_2}$ via adversarial training.\n$$S_x = En(x), \quad S_y = En(y), \quad t = \text{AdaIN}(S_x, S_y), \quad x_{Z_1 \to Z_2} = De(t). \quad (3)$$" }, { "heading": "4.2 TRAINING AND TESTING METHODOLOGY", "text": "We have applied a new training methodology in a GAN-based framework. We have designed a training procedure based on non-parallel data in order to learn the mapping function for many-to-many as well as zero-shot VC. We know that the idea of transitivity as a way to regularize structured data has a long history. This concept has been extended into the training methodologies of deep learning architectures (Zhu et al. (2017); Kim et al. (2017)). In this paper, we have encapsulated the idea of transitivity by introducing a reconstruction loss along with adversarial training. The entire training procedure is illustrated in Fig. 1(b).\nFirst, we randomly select two speakers $Z_1$ and $Z_2$. Formally, we have two sets of random variables, $(Z_1, U_1, X)$ and $(Z_2, U_2, Y)$, corresponding to the source and target speaker, respectively. After this, we randomly select $x_1, x_2 \in p_X(\cdot|Z_1, U_1)$ and $y_1, y_2 \in p_Y(\cdot|Z_2, U_2)$. During training, VC is done from the source speaker ($Z_1$) to the target speaker ($Z_2$) via the style transfer scheme illustrated in Fig. 1(a). Using $x_1$ and $y_1$, we transfer the speaking style of speaker $Z_2$ to $Z_1$. From eq. (3), we can describe this procedure as shown in eq.
(4):\n$$S_{x_1} = En(x_1), \quad S_{y_1} = En(y_1), \quad t_1 = \text{AdaIN}(S_{x_1}, S_{y_1}), \quad x_{Z_1 \to Z_2} = De(t_1). \quad (4)$$\nNow, using another sample of source speech (i.e., $x_2$), we reconstruct the source speech features (i.e., $x_{Z_1 \to Z_2 \to Z_1}$) from the features of the converted speech ($x_{Z_1 \to Z_2}$) in order to achieve better conversion efficiency. This procedure is described in eq. (5):\n$$S_{x_2} = En(x_2), \quad S_{x_{Z_1 \to Z_2}} = En(x_{Z_1 \to Z_2}), \quad t_2 = \text{AdaIN}(S_{x_{Z_1 \to Z_2}}, S_{x_2}), \quad x_{Z_1 \to Z_2 \to Z_1} = De(t_2). \quad (5)$$\nNow, the same cyclic process is applied in the opposite direction to transfer the speaking style of $Z_1$ to $Z_2$, and we get the following equations:\n$$S_{y_1} = En(y_1), \quad S_{x_1} = En(x_1), \quad t'_1 = \text{AdaIN}(S_{y_1}, S_{x_1}), \quad y_{Z_2 \to Z_1} = De(t'_1), \quad (6)$$\n$$S_{y_2} = En(y_2), \quad S_{y_{Z_2 \to Z_1}} = En(y_{Z_2 \to Z_1}), \quad t'_2 = \text{AdaIN}(S_{y_{Z_2 \to Z_1}}, S_{y_2}), \quad y_{Z_2 \to Z_1 \to Z_2} = De(t'_2). \quad (7)$$\nDuring testing, we give features of the source speaker’s speech, along with sample features of the target speaker, to the encoder. AdaGAN requires 3 s to 5 s of sample speech features of the target speaker in order to transfer the speaking style of the target speaker to the source speaker. This sample speech is used to estimate the mean and standard deviation of the target speaker’s distribution in its respective latent space. After this, the speaking style is transferred in the latent space of the source speaker using AdaIN. Next, the decoder generates speech back from the converted latent representation of the source speaker. Briefly, the decoder generates speech with the speaking style of the target speaker. Now, the generator of AdaGAN consists of the encoder and the decoder. Hence, we can say that during testing, the generator of AdaGAN generates speech with the speaking style of the target speaker for a given source speaker’s speech, along with the sample of the target speaker. The training procedure of AdaGAN is formally presented in Algorithm 1.\nAlgorithm 1: Training of AdaGAN\nInput: Weights of Encoder, Decoder, and Discriminator. Output: Optimized weights.\n1: for number of training iterations do\n2: randomly select two speakers ($Z_1$ and $Z_2$)\n3: sample 4 minibatches of cepstral features $\{x_1, x_2\} \in p_X(\cdot|Z_1, U_1)$ and $\{y_1, y_2\} \in p_Y(\cdot|Z_2, U_2)$\n4: /* The first column below transfers the speaking style of speaker $Z_2$ to $Z_1$; the second column transfers the speaking style of speaker $Z_1$ to $Z_2$. */\n5: $S_{X_1} \leftarrow En(x_1)$;  $S_{Y_1} \leftarrow En(y_1)$;\n6: $t_1 \leftarrow \text{AdaIN}(S_{X_1}, S_{Y_1})$;  $t'_1 \leftarrow \text{AdaIN}(S_{Y_1}, S_{X_1})$;\n7: $x' \leftarrow De(t_1)$;  $y' \leftarrow De(t'_1)$;\n8: $S_{X_2} \leftarrow En(x_2)$;  $S_{Y_2} \leftarrow En(y_2)$;\n9: $S_{X'} \leftarrow En(x')$;  $S_{Y'} \leftarrow En(y')$;\n10: $t_2 \leftarrow \text{AdaIN}(S_{X'}, S_{X_2})$;  $t'_2 \leftarrow \text{AdaIN}(S_{Y'}, S_{Y_2})$;  (following eqs. (5) and (7))\n11: update the generator by descending its stochastic gradient: $\theta_{En,De} \mathrel{+}\leftarrow \delta_{\theta_{En,De}}(\mathcal{L}_{adv} + \lambda_1 \mathcal{L}_{cyc} + \lambda_2 \mathcal{L}_{C_{X \to Y}} + \lambda_3 \mathcal{L}_{C_{Y \to X}} + \lambda_4 \mathcal{L}_{sty_{X \to Y}} + \lambda_5 \mathcal{L}_{sty_{Y \to X}})$\n12: update the discriminator by descending its stochastic gradient: $\theta_{Dis} \mathrel{+}\leftarrow \delta_{\theta_{Dis}}(\mathcal{L}_{adv})$\n13: end for\n14: return" }, { "heading": "4.3 LOSS FUNCTIONS", "text": "To achieve many-to-many and zero-shot VC, AdaGAN uses four different loss functions: adversarial loss, reconstruction loss, content preserve loss, and style transfer loss.\nAdversarial loss: This loss measures how distinguishable the converted data is from normal speech data. The smaller the loss, the closer the converted data distribution is to the normal speech distribution. Hence, we want to minimize the objective function given in eq. (9) against an adversary $Dis(\cdot)$ that tries to maximize it. Here, this loss is used to make the generated or converted speech
Here, this loss is used to make the generated or converted speech\nindistinguishable from the original speech, and can be mathematically formulated as:\nLadv(En,De) = (Dis(yZ2→Z1)− 1)2 + (Dis(xZ1→Z2)− 1)2, (8)\nLadv(Dis) = (Dis(x1)− 1)2 + (Dis(y1)− 1)2. (9)\nReconstruction Loss: By using only adversarial loss, we may loose linguistic information in the converted voice. This loss helps the encoder and decoder to retain the linguistic information in converted voice. We have used L1 norm as a reconstruction loss, and can be described as:\nLcyc = ‖xZ1→Z2→Z1 − x1‖1 + ‖yZ2→Z1→Z2 − y1‖1. (10)\nContent Preserve Loss: To preserve the linguistic content of the input speech during AdaIN. This loss also ensure that our encoder and decoder are noise free. We have used following L1 norm for this loss, i.e.,\nLCX→Y = ‖SxZ1→Z2 − t1‖1. (11)\nStyle transfer Loss: This loss function is at the heart of the AdaGAN. This loss plays a vital role in achieving many-to-many and zero-shot VC using AdaGAN. This loss helps AdaGAN to create a latent space with the speaking style features in terms of mean and standard deviation of the distribution while preserving the linguistic content in the same distribution. We have used L1 norm as style transfer loss, i.e.,\nLstyX→Y = ‖t2 − SX1‖1, (12)\nFinal Objective Function: The overall objective function of AdaGAN can be defined as:\nLtotal =Ladv(En,De) + Ladv(Dis) + λ1Lcyc + λ2LCX→Y + λ3LCY →X + λ4LstyX→Y + λ5LstyY →X ,\n(13)\nwhere λ1, λ2, λ3, λ4, and λ5 are the hyperparameters. These parameters controls the relative importance of each loss w.r.t. each other. We have used λ1 = 10, λ2 = 2, λ3 = 2, λ4 = 3, and λ5 = 3 during the experiments. We theoretically proved that how these simple loss functions are the key idea behind the performance of AdaGAN in the next Section. We optimized these loss functions according to the Algorithm 1." }, { "heading": "4.4 ARCHITECTURAL DETAILS", "text": "AdaGAN framework contains a Generator and a Discriminator. In this Section, we provide detailed information about each component of the AdaGAN framework.\nAs shown in Fig. 1, Generator of AdaGAN consists of mainly 2 modules: Encoder and Decoder. AdaGAN uses the same encoder to extract the features from the source and target speakers’ speech. Input of encoder is a vector of 40 Mel cepstral features, which it converts to a latent space of size 1x512. The decoder takes normalized feature vector of size 1x512 as input and converts it to 1x40 target speech features.\nIn encoder and decoder, all layers are fully-connected layers. In encoder, the input and output layer has 40 and 512 cell size, respectively. In decoder, input and output layer have 512 and 40 cell size, respectively. All the hidden layers in encoder and decoder consist 512 cell size. All the layers are followed by Rectified Linear Unit (ReLU) activation function except output layer.\nIn AdaGAN, main goal of the discriminator is similar to traditional GAN training. Accurately, it will discriminate whether the input is generated (xZ1→Z2 ) or from the original distribution. Same as Encoder and Decoder, structure of discriminator follows the stacked fully-connected layers. It consists of an input layer, 3 hidden layers and, an output layer with 40, 512, and 1 cell size, respectively. In discriminator, each layer followed by the ReLU activation function and output layer followed by a sigmoid activation function." 
}, { "heading": "5 ANALYSIS AND COMPARISON", "text": "In this Section, we show the theoretical correctness and intuitive explanation of AdaGAN. The key idea of the AdaGAN is to learn the latent space, where we can represent our features as per our requirements." }, { "heading": "5.1 THEORETICAL ANALYSIS", "text": "Consider the training procedure of AdaGAN described in Section 4.2. Let us take two latent space features Sx1 and Sx2 corresponding to two different sample features, x1 and x2, respectively, of the same speaker Z1. We are also going to take Sy1 from latent space of another speaker Z2, where y1 is a sample feature of that speaker, and Z1 6= Z2. After training of AdaGAN for a large number of iteration of τ , where theoretically τ →∞, let us assume the following:\n1. In the latent space, mean and standard deviation of the same speaker are constant irrespective of the linguistic content. Formally, we have µSx1 = µSx2 , and σSx1 = σSx2 .\n2. If we have different speakers, then mean and standard deviation of respective latent representations are different. Accurately, µSx1 6= µSy1 , and σSx1 6= σSy1 .\nTheorem 1: Given these assumptions, ∃ a latent space where normalized latent representation of input features will be the same irrespective of speaking style. Here, we take input features of same utterance U1. Hence,\nDKL( pIN (.|Z1, U1) ‖ pIN (.|Z2, U1) ) = 0, (14)\nwhere KL(·|·) is the KL-divergence, and pN (.|Zi, Ui) is pdf of normalized latent representation of input feature Ui, with speaking style of speaker Zi.\nThis is the fundamental theorem that lies behind the concept of AdaGAN. Intuitively, from this theorem, we can observe that the normalized latent representation of the same utterance spoken by different speakers is the same. This fact leads to the conclusion that linguistic content of speech is captured by the distribution of normalized latent space, and speaking style of a speaker is being captured by mean and standard deviation of the same distribution.\nTheorem 2: By optimization of minEn,De LCX→Y + LstyX→Y , the assumptions made in Theorem 1 can be satisfied.\nThe proof of both the theorems are given in Appendix A. Both the theorems conclude that AdaIN made style transfer easy and efficient via only using the mean and standard deviation of the distribution. In Appendix B, we provided the t-SNE visualization of the features in latent space to give the empirical proof.\n5.2 ADAGAN vs. STARGAN\nIn this Section, we show a comparison between AdaGAN and StarGAN-VC in terms of computational complexity. Table 1 and 2 provided the number of layers, FLoating point Operations Per Second (FLOPS), and trainable parameters1 for the AdaGAN and StarGAN-VC, respectively.\nIn Table 1, GAdaGAN and DAdaGAN are the generator and discriminator of AdaGAN, respectively. Parameters of the generator are calculated by adding the parameters of encoder and decoder. To calculate the FLOPS and parameters for StarGAN, we have used the open-source implementation of StarGAN-VC2. In the Table 2, GStarGAN , DStarGAN , and Cls are generator, discriminator, and classifier of StarGAN, respectively. All these three modules contain convolution layers. In StarGAN, there is weight sharing between the 5 convolution layers of discriminator and classifier. Here, we remove the FLOPS and trainable parameters of shared layers from the Cls. 
Hence, we consider them only once in the calculation of the total FLOPS and trainable parameters.\nWe can observe that AdaGAN is 88.6% less complex than StarGAN in terms of FLOPS, and 85.46% less complex in terms of trainable parameters. Moreover, StarGAN uses a one-hot encoding to get the information about the target speaker. However, AdaGAN requires only a 3 s - 5 s speech sample from the target speaker." }, { "heading": "6 EXPERIMENTAL RESULTS", "text": "In this Section, we show the experimental setup and subjective evaluation (or results) of AdaGAN. Samples of converted audio files are provided online (https://drive.google.com/open?id=1VzA2bRhUz1lZ4DBDOIKUO9wOM8xkcTj2)." }, { "heading": "6.1 EXPERIMENTAL SETUP", "text": "The experiments are performed on the VCTK corpus (Veaux et al. (2017)), which contains 44 hours of data for 109 speakers. The statistics of the database are given in Veaux et al. (2017). The database is designed to provide non-parallel data for VC. From this database, the AdaGAN system was developed on data of 20 speakers (10 males and 10 females). Out of this, we have used 80% of the data for training and 20% for testing for each speaker. In particular, we have used 6.27 and 1.45 hours of data for training and testing, respectively. The 40-dimensional (dim) Mel Cepstral Coefficients (MCCs) (including the 0th coefficient) and 1-dimensional F0 are extracted from the speech of the source and target speakers with a 25 ms window and a 5 ms frame-shift. For analysis-synthesis, we have used AHOCODER (Erro et al. (2011)). A mean-variance transformation method has been applied for fundamental frequency (F0) conversion (Toda et al. (2007)).\nTo evaluate AdaGAN empirically, we performed two subjective tests for evaluating naturalness and speaker similarity. In particular, Mean Opinion Score (MOS) tests have been conducted, where subjects have been asked to rate the randomly played converted speech on a 5-point scale for naturalness, where 1 means the converted voice is very robotic, and 5 means the converted voice is very natural. In the second test, subjects have been asked to rate how similar the converted voice is to the given reference target speech in terms of speaker similarity. Subjects rated converted voices for speaker similarity on a 5-point scale, where 1 means dissimilar with high confidence and 5 means similar with high confidence w.r.t. the given target speaker. In total, 15 subjects (6 females and 9 males, with no known hearing impairments, aged between 18 and 31 years) took part in the subjective evaluations." }, { "heading": "6.2 RESULTS FOR MANY-TO-MANY CONVERSION", "text": "Two male and two female speakers have been randomly selected from the testing dataset for subjective evaluations. We evaluated four different conversion systems, i.e., male-male (M2M), female-female (F2F), male-female (M2F), and female-male (F2M), developed using the proposed AdaGAN and StarGAN. From each system, two converted audio files have been selected. Hence, 8 audio files from AdaGAN and another 8 audio files from StarGAN have been taken for subjective evaluations. We kept the same source-target speaker-pairs for a fair comparison.\nFig. 2 shows the comparison of MOS scores between AdaGAN and the baseline StarGAN-VC. In total, 15 subjects (6 females and 9 males, between 18 and 30 years of age, with no known hearing impairments) took part in the subjective test.
For the statistical significance analysis, results for the different conversion possibilities are shown with 95% confidence intervals. In addition, our subjective tests yield a p-value of 0.013, which is much smaller than 0.05, clearly indicating the statistical significance of the results. From Fig. 2, it is clear that there is a 31.73% relative improvement (on average) in MOS score for AdaGAN compared to the baseline StarGAN. In terms of speaker similarity, AdaGAN yields on average a 10.37% relative improvement compared to the baseline (as shown in Fig. 3). Although AdaGAN outperforms StarGAN, neither method achieves a good score in the similarity test. The main reason lies in the F0 conversion and in the errors of statistical vocoders (i.e., AHOCODER and the WORLD vocoder). Neural network-based WaveNet vocoders show very promising results for speech synthesis; although very accurate, they are data-driven approaches. In summary, AdaGAN achieves better performance than StarGAN-VC in the MOS tests for both naturalness and speaker similarity." }, { "heading": "6.3 ZERO-SHOT LEARNING", "text": "In traditional many-to-many VC, all target speakers are seen while training the architecture. Hence, traditional algorithms cannot perform VC for an unseen speaker (i.e., the zero-shot VC case). Along with many-to-many VC, we extended our study of AdaGAN to zero-shot VC. Zero-shot conversion is the task of transferring the speaking style of a seen/unseen source speaker to a seen/unseen target speaker. In simple terms, conversion can be done between any speakers, whether or not their data were present in the corpus at training time. StarGAN-VC uses a one-hot vector as the target speaker reference during conversion; in the case of an unseen target speaker, it cannot perform zero-shot conversion. AdaGAN, however, maps the input to the required latent space (as proved in Appendix A) and is therefore able to learn a suitable latent space even for unseen speakers. Here, we show our experimental results for the zero-shot VC task. We performed subjective tests in the same manner as for many-to-many VC. We used AdaGAN trained on 20 speakers (10 males and 10 females). We then randomly selected 1 seen and 1 unseen male speaker, as well as 1 seen and 1 unseen female speaker, and applied permutations over these speakers to obtain all the different conversion cases, namely seen-seen (S2S), seen-unseen (S2U), unseen-seen (U2S), and unseen-unseen (U2U). Fig. 4 and Fig. 5 show the MOS scores for naturalness and speaker similarity, respectively.
Recently, AutoVC has been proposed, which is the only other framework for zero-shot VC (Qian et al. (2019)). To the best of the authors' knowledge, ours is the first GAN-based framework to achieve zero-shot VC. To perform zero-shot conversion, AutoVC requires a few samples (20 s) of the possible target speakers. In contrast, AdaGAN requires only 3 s - 5 s of sample speech of the seen or unseen target speaker to extract a latent representation of the target speaker, in order to generate voices that sound perceptually similar to that speaker. Moreover, a trained AdaGAN architecture can work with any source or target speaker." }, { "heading": "7 CONCLUSIONS AND FUTURE WORK", "text": "In this paper, we proposed the novel AdaGAN, primarily for the non-parallel many-to-many VC task. Moreover, we analyzed our proposed architecture w.r.t.
the current GAN-based state-of-the-art StarGAN-VC method for the same task. The main aim of VC is to convert the source speaker's voice into the target speaker's voice while preserving the linguistic content. To achieve this, we used a style transfer algorithm along with adversarial training. AdaGAN transfers the style of the target speaker onto the voice of a source speaker without using any feature-based mapping of the linguistic content of the source speaker's speech. For this task, AdaGAN uses only one generator and one discriminator, which leads to lower complexity: AdaGAN is almost 88.6% computationally less complex than StarGAN-VC. We performed subjective analyses on the VCTK corpus to show the efficiency of the proposed method, and AdaGAN clearly gives superior results in the subjective evaluations compared to StarGAN-VC.
Motivated by the work on AutoVC, we also extended the concept of AdaGAN to zero-shot conversion as an independent study and reported results. AdaGAN is the first GAN-based framework for zero-shot VC. In the future, we plan to explore high-quality vocoders, namely WaveNet, for further improvements in voice quality. The perceptual difference observed between the estimated and the ground-truth speech indicates the need to explore better objective functions that can perceptually optimize the network parameters of GAN-based architectures, which also forms our immediate future work." }, { "heading": "A MATHEMATICAL PROOF OF ADAGAN CONCEPT AND DIFFERENT LOSS FUNCTIONS", "text": "Here, we first give the proof of the concept behind AdaGAN, and then we prove how our loss functions help to satisfy the constraints derived in Theorem 1.
Theorem 1: Given the assumptions in Section 5.1, there exists a latent space where the normalized latent representation of input features is the same irrespective of speaking style.
Proof: From eq. (1), we can write the goal of AdaGAN as follows:
DKL( pX̂Z1→Z2(·|Z1, U1, Z2, U2) ‖ pX̂Z2→Z2(·|Z2, U1, Z2, U′2) ) = 0. (15)
From eq. (15), we can conclude that the output of AdaGAN after conversion using (Z1, U1) and (Z2, U2) is the same as after conversion using (Z2, U1) and (Z2, U′2), because either way we are transferring the speaking style of speaker Z2 to utterance U1. We can say that the output of AdaIN is the same for both cases in the latent space. Hence, we can rewrite eq. (15) as:
⟹ DKL( pAdaIN(·|Z1, U1, Z2, U2) ‖ pAdaIN(·|Z2, U1, Z2, U′2) ) = 0, (16)
where pAdaIN(·|·) is the pdf of the latent representation. For given input samples x1 and y1, we can write the following from eq. (16):
pAdaIN,x1(·|Z1, U1, Z2, U2) = pAdaIN,y1(·|Z2, U1, Z2, U′2). (17)
From Fig. 1, we can write eq. (17) as:
⟹ [(Sx1(τ) − µ1(τ)) / σ1(τ)] · σ2(τ) + µ2(τ) = [(Sy1(τ) − µ″2(τ)) / σ″2(τ)] · σ′2(τ) + µ′2(τ),
where τ represents the training iteration. Now, taking limτ→∞ on both sides, we assume that µ″2(τ) = µ′2(τ) = µ2(τ) and σ″2(τ) = σ′2(τ) = σ2(τ). Therefore,
[(Sx1(τ) − µ1(τ)) / σ1(τ)] · σ2(τ) + µ2(τ) = [(Sy1(τ) − µ2(τ)) / σ2(τ)] · σ2(τ) + µ2(τ),
⟹ (Sx1(τ) − µ1(τ)) / σ1(τ) = (Sy1(τ) − µ2(τ)) / σ2(τ). (18)
At τ → ∞, the assumptions made in Section 5.1 hold. Hence, from eq.
(18), we can conclude that there exists a latent space where the normalized latent representation of input features is the same irrespective of speaking style.
Theorem 2: By optimizing minEn,De LCX→Y + LstyX→Y, the assumptions made in Theorem 1 can be satisfied.
Proof: Our objective function is the following:
minEn,De LCX→Y + LstyX→Y. (19)
We iterate step by step to calculate the term t2 used in the loss function LstyX→Y. Consider the latent representations Sx1 and Sy1 corresponding to the source and target speech, respectively.
Step 1: [(Sx1(τ) − µ1(τ)) / σ1(τ)] · σ2(τ) + µ2(τ) (representation of t1),
Steps 2 & 3: En{ De[ [(Sx1(τ) − µ1(τ)) / σ1(τ)] · σ2(τ) + µ2(τ) ] }.
After applying the decoder and encoder sequentially to the latent representation, we get back the same representation. This is ensured by the loss function LCX→Y; formally, we want LCX→Y → 0. Therefore, we can write Step 4 as:
Step 4: [(Sx1(τ) − µ1(τ)) / σ1(τ)] · σ2(τ) + µ2(τ) (i.e., reconstructed t1),
Step 5: (1/σ2(τ)) · [ [(Sx1(τ) − µ1(τ)) / σ1(τ)] · σ2(τ) + µ2(τ) − µ2(τ) ] (normalization with its own µ and σ, i.e., those of the latent representation in Step 4, during AdaIN),
Step 6: (Sx1(τ) − µ1(τ)) / σ1(τ) (final output of Step 5),
Step 7: [(Sx1(τ) − µ1(τ)) / σ1(τ)] · σ′1(τ) + µ′1(τ) (output after de-normalization in AdaIN; representation of t2),
where µ′1 and σ′1 are the mean and standard deviation of another input source speech, x2. Now, using the mathematical representation of t2, we can write the loss function LstyX→Y as:
LstyX→Y = [(Sx1(τ) − µ1(τ)) / σ1(τ)] · σ′1(τ) + µ′1(τ) − Sx1(τ). (20)
According to eq. (19), we want to minimize the loss function LstyX→Y; formally, LstyX→Y → 0. Therefore, we obtain µ1 = µ′1 and σ1 = σ′1. Hence, the mean and standard deviation of the same speaker are constant, and different for different speakers, irrespective of the linguistic content. We conclude that our loss functions satisfy the necessary constraints (assumptions) required in the proof of Theorem 1." }, { "heading": "B T-SNE VISUALIZATION OF LATENT SPACE LEARNED BY ADAGAN", "text": "Neural Networks (NNs) are hard to train and optimize; even when theoretical proofs are available, statistical and empirical analysis is required. For this analysis, we adopted t-SNE visualization. We randomly selected a few utterances from two different speakers of the VCTK corpus. Latent representations were extracted for the speech of these speakers, and the features were reduced to 2-D using t-SNE. The scatter plot in Fig. 6 shows that the data points are clustered based on speaking style. After being normalized with their respective means and standard deviations, these distributions overlap. This shows that the distribution of the normalized latent representation captures linguistic-information-based features irrespective of speaking style, as proved in Theorem 1. Therefore, we can say that AdaGAN and its losses are effective for practical purposes." } ]
2019
null
SP:25106cb1a3e5ead20e58b680eeb6aa361c07e1ff
[ "In ES the goal is to find a distribution pi_theta(x) such that the expected value of f(x) under this distribution is high. This can be optimized with REINFORCE or with more sophisticated methods based on the natural gradient. The functional form of pi_theta is almost always a Gaussian, but this isn't sufficiently flexible (e.g. multi-modal) to provide a good optimization algorithm. In response, the authors advocate for using a flexible family of generative neural networks for pi_theta. Using NICE as a generative model is desirable because it maintains volumes. This means that we can adjust volumes in latent space and this directly corresponds to volumes in x space. Doing so is useful to be able to tune how concentrated the search distribution is and to explicitly reason about the mode of the search distribution.", "As the title of the paper states, this paper tries to improve evolution strategies (ES) using a generative neural network. In the standard ES candidate solution is generated from a multivariate normal distribution, where the parameters of the distribution are adapted during the optimization process. The authors claim that the gaussian distribution, i.e., the ellipsoidal shape of the sampling distribution, is not adequate for the objective functions such as multimodal functions or functions with curved ridge levelsets such as the well-known Rosenbrock functions. The motivation is clearly stated. The technique is interesting and non-trivial. However, the experimental results are not very convincing to conclude that the proposed approach achieves the stated goal. Moreover, this paper may fit more to optimization conferences such as GECCO. " ]
Evolutionary Strategies (ES) are a popular family of black-box zeroth-order optimization algorithms which rely on search distributions to efficiently optimize a large variety of objective functions. This paper investigates the potential benefits of using highly flexible search distributions in ES algorithms, in contrast to standard ones (typically Gaussians). We model such distributions with Generative Neural Networks (GNNs) and introduce a new ES algorithm that leverages their expressiveness to accelerate the stochastic search. Because it acts as a plug-in, our approach makes it possible to augment virtually any standard ES algorithm with flexible search distributions. We demonstrate the empirical advantages of this method on a diverse set of objective functions.
[]
[ { "authors": [ "Aman Agarwal", "Soumya Basu", "Tobias Schnabel", "Thorsten Joachims" ], "title": "Effective evaluation using logged bandit feedback from multiple loggers", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Shun-Ichi Amari" ], "title": "Natural gradient works efficiently in learning", "venue": "Neural Computation,", "year": 1998 }, { "authors": [ "Anne Auger", "Nikolaus Hansen" ], "title": "A restart cma evolution strategy with increasing population size", "venue": "IEEE congress on evolutionary computation,", "year": 2005 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "NICE: Non-Linear Independent Components Estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density Estimation using Real NVP", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Yan Duan", "Xi Chen", "Rein Houthooft", "John Schulman", "Pieter Abbeel" ], "title": "Benchmarking Deep Reinforcement Learning for Continuous Control", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Louis Faury", "Flavian Vasile", "Clément Calauzènes", "Oliver Fercoq" ], "title": "Neural Generative Models for Global Optimization with Gradients", "venue": "arXiv preprint arXiv:1805.08594,", "year": 2018 }, { "authors": [ "Frauke Friedrichs", "Christian Igel" ], "title": "Evolutionary tuning of multiple SVM", "venue": "parameters. Neurocomputing,", "year": 2005 }, { "authors": [ "Tobias Glasmachers", "Tom Schaul", "Sun Yi", "Daan Wierstra", "Jürgen Schmidhuber" ], "title": "Exponential natural evolution strategies", "venue": "In Proceedings of the 12th annual conference on Genetic and evolutionary computation,", "year": 2010 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative Adversarial Nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Will Grathwohl", "Dami Choi", "Yuhuai Wu", "Geoff Roeder", "David Duvenaud" ], "title": "Backpropagation through the void: Optimizing control variates for black-box gradient estimation", "venue": "arXiv preprint arXiv:1711.00123,", "year": 2017 }, { "authors": [ "Nikolaus Hansen" ], "title": "The CMA Evolution Strategy: a tutorial", "venue": "arXiv preprint arXiv:1604.00772,", "year": 2016 }, { "authors": [ "Nikolaus Hansen", "Andreas Ostermeier" ], "title": "Completely derandomized self-adaptation in Evolution Strategies", "venue": "Evolutionary Computation,", "year": 2001 }, { "authors": [ "Nikolaus Hansen", "Anne Auger", "Steffen Finck", "Raymond Ros" ], "title": "Real-parameter black-box optimization benchmarking 2010: Experimental setup", "venue": "PhD thesis,", "year": 2010 }, { "authors": [ "Nikolaus Hansen", "Anne Auger", "Olaf Mersmann", "Tea Tusar", "Dimo Brockhoff" ], "title": "Coco: A platform for comparing continuous optimizers in a black-box setting", "venue": "arXiv preprint arXiv:1603.08785,", "year": 2016 }, { "authors": [ "Donald R Jones", "Matthias Schonlau", "William J Welch" ], "title": "Efficient global optimization of expensive black-box functions", "venue": "Journal of Global optimization,", "year": 1998 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic 
Optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding Variational Bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Guoqing Liu", "Li Zhao", "Feidiao Yang", "Jiang Bian", "Tao Qin", "Nenghai Yu", "Tie-Yan Liu" ], "title": "Trust Region Evolution Strategies", "venue": "Association for the Advancement of Artificial Intelligence,", "year": 2019 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "CMA-ES for hyperparameter optimization of deep neural networks", "venue": "arXiv preprint arXiv:1604.07269,", "year": 2016 }, { "authors": [ "Ilya Loshchilov", "Marc Schoenauer", "Michele Sebag" ], "title": "Alternative restart strategies for cma-es", "venue": "In International Conference on Parallel Problem Solving from Nature,", "year": 2012 }, { "authors": [ "Andrew L Maas", "Awni Y Hannun", "Andrew Y Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In ICML Workshop on Deep Learning for Audio, Speech and Language Processing. Citeseer,", "year": 2013 }, { "authors": [ "David JC MacKay" ], "title": "Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers", "venue": "Detectors and Associated Equipment,", "year": 1995 }, { "authors": [ "James Martens" ], "title": "Deep Learning via Hessian-free Optimization", "venue": "In International Conference of Machine Learning,", "year": 2010 }, { "authors": [ "Bogdan Mazoure", "Thang Doan", "Audrey Durand", "R Devon Hjelm", "Joelle Pineau" ], "title": "Leveraging exploration in off-policy algorithms via normalizing flows", "venue": null, "year": 1905 }, { "authors": [ "Jorge J Moré", "Stefan M Wild" ], "title": "Benchmarking derivative-free optimization algorithms", "venue": "SIAM Journal on Optimization,", "year": 2009 }, { "authors": [ "Thomas Nedelec", "Nicolas Le Roux", "Vianney Perchet" ], "title": "A Comparative Study of Counterfactual Estimators", "venue": "arXiv preprint arXiv:1704.00773,", "year": 2017 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Leonid Peshkin", "Christian R Shelton" ], "title": "Learning from scarce experience", "venue": "In Proceedings of the Nineteenth International Conference on Machine Learning,", "year": 2002 }, { "authors": [ "Aravind Rajeswaran", "Kendall Lowrey", "Emanuel V Todorov", "Sham M Kakade" ], "title": "Towards generalization and simplicity in continuous control", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ingo Rechenberg" ], "title": "Evolutionsstrategien. In Simulationsmethoden in der Medizin und Biologie, pp. 
83–114", "venue": null, "year": 1978 }, { "authors": [ "Danilo Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Oren Rippel", "Ryan Prescott Adams" ], "title": "High-dimensional Probability Estimation with Deep Density Models", "venue": "arXiv preprint arXiv:1302.5125,", "year": 2013 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution Strategies as a scalable alternative to Reinforcement Learning", "venue": "arXiv preprint arXiv:1703.03864,", "year": 2017 }, { "authors": [ "Tom Schaul", "Tobias Glasmachers", "Jürgen Schmidhuber" ], "title": "High dimensions and heavy tails for Natural Evolution Strategies", "venue": "In Proceedings of the 13th annual conference on Genetic and Evolutionary Computation,", "year": 2011 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal Policy Optimization Algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Hans-Paul Schwefel" ], "title": "Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie: mit einer vergleichenden Einführung in die Hill-Climbing-und Zufallsstrategie", "venue": null, "year": 1977 }, { "authors": [ "Bobak Shahriari", "Kevin Swersky", "Ziyu Wang", "Ryan P Adams", "Nando De Freitas" ], "title": "Taking the human out of the loop: A review of Bayesian Optimization", "venue": "Proceedings of the IEEE,", "year": 2016 }, { "authors": [ "Akash Srivastava", "Lazar Valkoz", "Chris Russell", "Michael U Gutmann", "Charles Sutton" ], "title": "Veegan: Reducing mode collapse in GANs using Implicit Variational Learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Adith Swaminathan", "Thorsten Joachims" ], "title": "Counterfactual Risk Minimization: Learning from Logged Bandit Feedback", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Patrick Nadeem Ward", "Ariella Smofsky", "Avishek Joey Bose" ], "title": "Improving exploration in softactor-critic with normalizing flows policies", "venue": "arXiv preprint arXiv:1906.02771,", "year": 2019 }, { "authors": [ "Daan Wierstra", "Tom Schaul", "Jan Peters", "Juergen Schmidhuber" ], "title": "Natural Evolution Strategies", "venue": "In Evolutionary omputation,", "year": 2008 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist Reinforcement Learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Wierstra" ], "title": "2008)) for both its baselines versions and its inner optimization parts in GNN-xNES. The population size λ is one such hyper-parameters, and is therefore set to λ = 4 + b3 log(d)c. Also, as it is classically done in ES algorithms, we use a rank-based fitness shaping, designed to make the algorithm invariant with respect to order-preserving cost transformations", "venue": null, "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "We are interested in the global minimization of a black-box objective function, only accessible through a zeroth-order oracle. In many instances of this problem the objective is expensive to evaluate, which excludes brute force methods as a reasonable mean of optimization. Also, as the objective is potentially non-convex and multi-modal, its global optimization cannot be done greedily but requires a careful balance between exploitation and exploration of the optimization landscape (the surface defined by the objective).\nThe family of algorithms used to tackle such a problem is usually dictated by the cost of one evaluation of the objective function (or equivalently, by the maximum number of function evaluations that are reasonable to make) and by a precision requirement. For instance, Bayesian Optimization (Jones et al., 1998; Shahriari et al., 2016) targets problems of very high evaluation cost, where the global minimum must be approximately discovered after a few hundreds of function evaluations. When aiming for a higher precision and hence having a larger budget (e.g. thousands of function evaluations), a popular algorithm class is the one of Evolutionary Strategies (ES) (Rechenberg, 1978; Schwefel, 1977), a family of heuristic search procedures.\nES algorithms rely on a search distribution, which role is to propose queries of potentially small value of the objective function. This search distribution is almost always chosen to be a multivariate Gaussian. It is namely the case of the Covariance Matrix Adaptation Evolution Strategies (CMA-ES) (Hansen & Ostermeier, 2001), a state-of-the-art ES algorithm made popular in the machine learning community by its good results on hyper-parameter tuning (Friedrichs & Igel, 2005; Loshchilov & Hutter, 2016). It is also the case for Natural Evolution Strategies (NES) (Wierstra et al., 2008) algorithms, which were recently used for direct policy search in Reinforcement Learning (RL) and shown to compete with state-of-the-art MDP-based RL techniques (Salimans et al., 2017). Occasionally, other distributions have been used; e.g. fat-tails distributions like the Cauchy were shown to outperform the Gaussian for highly multi-modal objectives (Schaul et al., 2011).\nWe argue in this paper that in ES algorithms, the choice of a standard parametric search distribution (Gaussian, Cauchy, ..) constitutes a potentially harmful implicit constraint for the stochastic search of a global minimum. To overcome the limitations of classical parametric search distributions, we propose using flexible distributions generated by bijective Generative Neural Networks (GNNs), with computable and differentiable log-probabilities. We discuss why common existing optimization methods in ES algorithms cannot be directly used to train such models and design a tailored algorithm that efficiently train GNNs for an ES objective. We show how this new algorithm can readily incorporate existing ES algorithms that operates on simple search distributions,\nAlgorithm 1: Generic ES procedure input: zeroth-order oracle on f , distribution π0, population size λ repeat\n(Sampling) Sample x1, . . . , xλ i.i.d∼ πt (Evaluation) Evaluate f(x1), . . . , f(xn). (Update) Update πt to produce x of potentially smaller objective values.\nuntil convergence;\nlike the Gaussian. 
On a variety of objective functions, we show that this extension can significantly accelerate ES algorithms.\nWe formally introduce the problem and provide background on Evolutionary Strategies in Section 2. We discuss the role of GNNs in generating flexible search distributions in Section 3. We explain why usual algorithms fail to train GNNs for an ES objective and introduce a new algorithm in Section 4. Finally we report experimental results in Section 5." }, { "heading": "2 PRELIMINARIES", "text": "In what follows, the real-valued objective function f is defined over a compact X and π will generically denote a probability density function overX . We consider the global optimization of f :\nx∗ ∈ argmin x∈X f(x) (1)" }, { "heading": "2.1 EVOLUTIONARY STRATEGIES", "text": "The generic procedure followed by ES algorithms is presented in Algorithm 1. To make the update step tractable, the search distribution is tied to a family of distributions and parametrized by a realvalued parameter vector θ (e.g. the mean and covariance matrix of a Gaussian), and is referred to as πθ. This update step constitutes the main difference between ES algorithms.\nNatural Evolution Strategies One principled way to perform that update is to minimize the expected objective value over samples x drawn from πθ. Indeed, when the search distribution is parametric and tied to a parameter θ, this objective can be differentiated with respect to θ thanks to the log-trick:\nJ(θ) , Eπθ [f(x)] and ∂J(θ)\n∂θ = Eπθ\n[ f(x) ∂ log πθ(x)\n∂θ\n] (2)\nThis quantity can be approximated from samples - it is known as the score-function or REINFORCE (Williams, 1992) estimator, and provides a direction of update for θ. Unfortunately, naively following a stochastic version of the gradient (2) – a procedure called Plain Gradient Evolutionary Strategies (PGES) – is known to be highly ineffective. PGES main limitation resides in its instability when the search distribution is concentrating, making it unable to precisely locate any local minimum. To improve over the PGES algorithm the authors of Wierstra et al. (2008) proposed to descend J(θ) along its natural gradient (Amari, 1998). More precisely, they introduce a trust-region optimization scheme to limit the instability of PGES, and minimize a linear approximation of J(θ) under a Kullback-Leibler (KL) divergence constraint:\nargmin δθ\nJ(θ + δθ) ' J(θ) + δθT∇θJ(θ) s.t KL(πθ+δθ||πθ) ≤ (3)\nTo avoid solving analytically the trust region problem (3), Wierstra et al. (2008) shows that its solution can be approximated by:\nδθ∗ ∝ −F−1θ ∇θJ(θ) where Fθ = Eπθ [ ∇θ log πθ(x)∇θ log πθ(x)T ] (4)\nis the Fischer Information Matrix (FIM) of πθ. The parameter θ is therefore not updated along the negative gradient of J but rather along F−1θ ∇θJ(θ), a quantity known as the natural gradient. The FIM Fθ is known analytically when πθ is a multivariate Gaussian and the resulting algorithm, Exponential Natural Evolutionary Strategies (xNES) (Glasmachers et al., 2010) has been shown to reach state-of-the-art performances on a large ES benchmark.\nCMA-ES Naturally, there exist other strategies to update the search distribution πθ. For instance, CMA-ES relies on a variety of heuristic mechanisms like covariance matrix adaptation and evolution paths, but is only defined when πθ is a multivariate Gaussian. Explaining such mechanisms would be out of the scope of this paper, but the interested reader is referred to the work of Hansen (2016) for a detailed tutorial on CMA-ES." 
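To make the score-function estimator of eq. (2) concrete, here is a minimal NumPy sketch for the special case of an isotropic Gaussian search distribution with mean µ and fixed standard deviation σ (for which ∇µ log πθ(x) = (x − µ)/σ²). It is an illustration under these assumptions, not the PGES implementation evaluated in the paper.

import numpy as np

def score_function_grad(f, mu, sigma, n_samples=100, rng=None):
    # Monte-Carlo REINFORCE estimate of the gradient of E[f(x)] w.r.t. mu.
    rng = rng or np.random.default_rng()
    x = mu + sigma * rng.standard_normal((n_samples, mu.shape[0]))
    fx = np.array([f(xi) for xi in x])       # objective values of the samples
    score = (x - mu) / sigma**2              # per-sample grad_mu log pi(x)
    return (fx[:, None] * score).mean(axis=0)

# e.g., on the sphere function in dimension 5:
grad = score_function_grad(lambda x: np.sum(x**2), np.ones(5), 0.5)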
}, { "heading": "2.2 LIMITATIONS OF CLASSICAL SEARCH DISTRIBUTIONS", "text": "ES implicitly balance the need for exploration and exploitation of the optimization landscape. The exploitation phase consists in updating the search distribution, and exploration happens when samples are drawn from the search distribution’s tails. The key role of the search distribution is therefore to produce a support adapted to the landscape’s structure, so that new points are likely to improve over previous samples.\nWe argue here that the choice of a given parametric distribution (the multivariate Gaussian distribution being overwhelmingly represented in state-of-the-art ES algorithms) constitutes a potentially harmful implicit constraint for the stochastic search of a global minimum. For instance, a Gaussian distribution is not adapted to navigate a curved valley because of its inability to continuously curve its density. This lack of flexibility will lead it to drastically reduce its entropy, until the curved valley looks locally straight. At this point, the ES algorithm resembles a hill-climber and barely takes advantage of the exploration abilities of the search distribution. An illustration of this phenomenon is presented in Figure 2 on the Rosenbrock function. Another limitation of classical search distribution is their inability to follow multiple hypothesis, that is to explore at the same time different local minima. Even if mixture models can show such flexibility, hyper-parameters like the number of mixtures have optimal values that are impossible to guess a priori.\nWe want to introduce flexible search distributions to overcome these limitations. Such distributions should, despite their expressiveness, be easily trainable. We should also be concerned when designing them with their role in the exploration/exploitation trade off: a search distribution with too much capacity could over-fit some seemingly good samples, leading to premature convergence. To sum-up, we want to design search-distributions that are:\n• more flexible than classical distributions • yet easily trainable • while keeping control over the exploration / exploitation trade-off\nIn the following section, we carefully investigate the class of Generative Neural Networks (GNNs) to find a parametric class of distributions satisfying such properties." }, { "heading": "3 FLEXIBLE SEARCH DISTRIBUTIONS WITH GNNS", "text": "Generative Neural Networks (MacKay, 1995) have been studied in the context of density estimation and shown to be able to model complex and highly multimodal distributions (Srivastava et al., 2017). We propose here to leverage their expressiveness for ES, and train them in a principled way thanks to the ES objective:\nJ(π) = Eπ [f(x)]\nAs discussed in Section 2, optimizing J(π) with gradient-based methods is possible through the score-function estimator, which requires to be able to compute and efficiently differentiate the logprobabilities of π." }, { "heading": "3.1 GNN BACKGROUND", "text": "The core idea behind a GNN is to map a latent variable z ∈ Z drawn from a known distribution νω to an output variable x = gη(z) where gη is the forward-pass of a neural network. The parameter η represents the weights of this neural network while ω describe the degrees of freedom of the latent space distribution νω . We denote θ=(ω, η) and πθ(x) the density of the output variable x.\nFor general neural network architectures, it is impossible to compute πθ(x) for samples x drawn from the GNN. 
This is precisely why they are often trained with adversarial methods (Goodfellow et al., 2014) for sample generation purposes, bypassing the need to compute densities, but at the expense of good density estimation (mode-dropping). An alternative to adversarial methods was proposed with variational auto-encoders (Kingma & Welling, 2013), however at the cost of learning two neural networks (an encoder and a decoder). A less computationally expensive method consists in restricting the possible architectures to build bijective GNNs, also known as Normalizing Flows (NF) (Rezende & Mohamed, 2015; Papamakarios et al., 2017), which allows the exact computation of the distribution's density. Indeed, if gη is a bijection from Z to X with inverse hη ≜ g−1η, the change of variable formula provides a way to compute πθ(x):
πθ(x) = νω(hη(x)) · |∂hη(x)/∂x| (5)
To have a tractable density, one therefore needs to ensure that the determinant of the Jacobian |∂hη(x)/∂x| is easily computable. Several models satisfying these two properties (i.e., bijectivity and a computable Jacobian) have been proposed for density estimation (Rippel & Adams, 2013; Dinh et al., 2014; 2016), and proved their expressiveness despite their relatively simple structure.
NFs therefore answer two of our needs when building our new search distribution: flexibility and ease of training. In this work, we will focus on one NF model: the Non-Linear Independent Component Estimation (Dinh et al., 2014) (NICE) model, for its numerical stability and volume-preserving properties." }, { "heading": "3.2 NICE MODEL", "text": "The authors of NICE proposed to build complex yet invertible transformations through the use of additive coupling layers. An additive coupling layer leaves half of its input unchanged, and adds a non-linear transformation of the first half to the second half. More formally, by noting v = [v1, v2] the output of a coupling layer and u = [u1, u2] its input, one has:
v1 = u1 and v2 = u2 + t(u1) (6)
where t is an arbitrarily complex transformation - modelled by a Multi-Layer Perceptron (MLP) with learnable weights and biases. This transformation has unit Jacobian determinant and is easily invertible:
u1 = v1 and u2 = v2 − t(v1) (7)
which only requires a feed-forward pass on the MLP t. The choice of the decomposition u = [u1, u2] can be arbitrary, and is performed by applying a binary filter to the input. By stacking additive coupling layers, one can create complex distributions, and the inversion of the resulting mapping is independent of the complexity of the neural networks t. The density of the resulting distribution is readily computable thanks to the inverse transform theorem (5)." }, { "heading": "3.3 VOLUME PRESERVING PROPERTIES", "text": "The transformation induced by NICE is volume preserving (it has a unitary Jacobian determinant). This is quite desirable in an ES context, as the role of concentrating the distribution on a minimum can be left to the latent space distribution νω. The role of the additive coupling layers is therefore only to introduce non-linearities in the inverse transform hη so that the distribution is better adapted to the optimization landscape. The fact that this fit is volume-preserving (every subset of the latent space has an image in the data space with the same probability mass) encourages the search distribution to align its tails with regions of small value of the optimization landscape, which is likely to improve the quality of future exploration steps.
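A minimal NumPy sketch of one additive coupling layer (eqs. (6)-(7)); the inner map t is a toy tanh layer standing in for the MLP (any function of u1 keeps the layer bijective), which is enough to see the exactly-invertible, unit-Jacobian structure. Names and sizes are illustrative.

import numpy as np

class AdditiveCoupling:
    def __init__(self, d, rng=None):
        rng = rng or np.random.default_rng()
        self.half = d // 2
        # Toy stand-in for the MLP t; weights are fixed at construction.
        self.W = 0.1 * rng.standard_normal((self.half, d - self.half))

    def t(self, u1):
        return np.tanh(u1 @ self.W)

    def forward(self, u):                    # eq. (6), unit Jacobian
        u1, u2 = u[:self.half], u[self.half:]
        return np.concatenate([u1, u2 + self.t(u1)])

    def inverse(self, v):                    # eq. (7), exact inverse
        v1, v2 = v[:self.half], v[self.half:]
        return np.concatenate([v1, v2 - self.t(v1)])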
The NICE model therefore fits our needs perfectly: a flexible search distribution that is easy to train and that provides enough control over the exploration / exploitation trade-off. Other bijective GNN models like the Real-NVP (Dinh et al., 2016) introduce non-volume-preserving transformations, which cannot provide such control. In practice, we observed that using such transformations for ES led to early concentration and premature convergence." }, { "heading": "4 AN EFFICIENT TRAINING ALGORITHM", "text": "We are now equipped with enough tools to use GNNs for ES: an adapted model (NICE) for our search distribution πθ, and an objective to train it with:
J(θ) = Eπθ[f(x)] (8)
Here, θ jointly describes the free parameters of the latent distribution νω and η, the weights and biases of the MLPs forming the additive coupling layers.
We start this section by explaining why existing training strategies based on the objective (8) are not sufficient to truly leverage the flexibility of GNNs for ES, before introducing a new algorithm tailored for this task." }, { "heading": "4.1 LIMITATIONS OF EXISTING TRAINING STRATEGIES", "text": "We found that the PGES algorithm (naive stochastic gradient descent of (8) with the score-function estimator) applied to the NICE distribution suffers from the same limitations as when applied to the Gaussian; it is unable to precisely locate any local minimum. As for the Gaussian, training the NICE distribution for ES requires employing more sophisticated algorithms - such as NES.
However, using the natural gradient for GNN distributions is not trivial. First, the Fischer Information Matrix Fθ is not known analytically and must be estimated via Monte-Carlo sampling, thereby introducing approximation errors. Also, we found that the approximations that justify following the descent direction provided by the natural gradient are not adapted to the NICE distribution. Indeed, the assumption behind the NES update (4) is that the loss J(θ) can be (locally) well approximated by the quadratic objective:
J(θ + δθ) = J(θ) + δθᵀ∇θJ(θ) + (γ/2) · δθᵀFθδθ (9)
where γ is a given non-negative Lagrange multiplier. For NICE, given the highly non-linear nature of πθ, this approximation is bound to fail even close to the current parameter θ and will lead to spurious updates. A classical technique (Martens, 2010) to avoid such updates is to artificially increase the curvature of the quadratic term; it is known as damping. Practically, this implies using Fθ + βI instead of Fθ as the local curvature metric, with β a non-negative damping parameter.
We found that, to ensure a continuous decrease of J(θ), and because of the highly non-linear nature of the GNN, the damping parameter β has to be set to such high values that the modifications of the search distribution become too small to make quick progress, and the method by no means reaches state-of-the-art performance. We observed that even if the training of the additive coupling layers is performed correctly (i.e., the distribution has the correct shape), high damping of the latent space parameters prevents the distribution from quickly concentrating when a minimum is found.
It is unclear how the damping parameter should be adapted to avoid spurious updates while still allowing the distribution to make large steps in the latent space and ensure fast concentration when needed. In the following, we present an alternating minimization scheme to bypass the issues raised by natural-gradient training of GNN distributions in an ES context."
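For concreteness, here is a sketch of the damped natural-gradient step discussed above, with the FIM estimated by Monte-Carlo averaging of per-sample scores; it illustrates eq. (4) together with the damping Fθ + βI, and is not the authors' implementation.

import numpy as np

def damped_natural_gradient(grad_log_probs, grad_J, beta=1e-2):
    # grad_log_probs: (n, p) per-sample scores grad_theta log pi(x_i).
    # grad_J:         (p,) plain (score-function) gradient of J.
    n, p = grad_log_probs.shape
    F = grad_log_probs.T @ grad_log_probs / n     # Monte-Carlo FIM estimate
    return np.linalg.solve(F + beta * np.eye(p), grad_J)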
}, { "heading": "4.2 ALTERNATING MINIMIZATION", "text": "So far, we used the parameter θ to describe both ω and η (respectively, the free parameters of the latent space distribution νω and the degrees of freedom of the non-linear mapping gη), and the\noptimization over all these parameters was performed jointly. Separating the roles of ω and η, the initial objective (2) can be rewritten as follows:\nJ(θ) = Ez∼νω [f(gη(z))] = J(ω, η) (10)\nTherefore, the initial objective can be rewritten as the expected value of samples drawn from the latent distribution, under the objective f ◦ gη - that is, the representation of the objective function f in the latent space. If νω is a standard distribution (i.e efficiently trainable with the natural gradient) and f ◦ gη is a well structured function (i.e one for which νω is an efficient search distribution), then the single optimization of ω by classical methods (such as the natural gradient) should avoid the limitations discussed in 2.2. This new representation motivates the design of a new training algorithm that optimizes the parameters ω and η separately.\nAlternating Minimization In the following, we will replace the notation πθ with πω,η to refer to the NICE distribution with parameter θ = (ω, η). We want to optimize ω and η in an alternate fashion, which means performing the following updates at every step of the ES procedure:\nωt+1 = argmin ω J(ω, ηt) (11a)\nηt+1 = argmin η J(ωt+1, η) (11b)\nThis means that at iteration t, samples are drawn from πωt,ηt and serve to first optimize the latent space distribution parameters ω, and then the additive coupling layers parameters η. For the following iteration, the population is sampled under πωt+1,ηt+1 .\nThe update (11a) of the latent space parameters is naturally derived from the new representation (10) of the initial objective. Indeed, ω can be updated via natural gradient ascent of J(ω, ηt) - that is with keeping η = ηt fixed. Practically, this therefore reduces to applying a NES algorithm to the latent distribution νω on the modified objective function f ◦ gηt . Once the latent space parameters updated, the coupling layers parameters should be optimized with respect to:\nJ(ωt+1, η) = Eπωt+1,η [f(x)] (12)\nAt this stage, the only available samples are drawn under πωt,ηt . To estimate, based on these samples, expectations under πωt+1,ηt one must use importance propensity scores:\nJ(ωt+1, η) = Eπωt,ηt\n[ f(x) πωt+1,η(x)\nπωt,ηt(x)\n] (13)\nThe straightforward minimization of this off-line objective is known to lead to degeneracies (Swaminathan & Joachims, 2015, Section 4), and must therefore be regularized. For our application, it is also desirable to make sure that the update η does not undo the progress made in the latent space - in other words, we want to regularize the change in f ◦ gη . To that extent, we adopt a technique proposed in Schulman et al. (2017) and minimize a modification on the initial objective with clipped propensity weights:\nηt+1 = argmin η Eπωt+1,ηt\n[ f(x)clipε ( πωt+1,η(x)\nπωt+1,ηt(x)\n)] (14)\nclipε(x) clips the value of x between 1 − and 1 + . The parameter ε is an hyper-parameter that controls the change in distribution, and the program (14) can be efficiently solved via a gradient descent type algorithm, such as Adam (Kingma & Ba, 2014).\nTo sum up, we propose optimizing the latent distribution and the coupling layers separately. The latent space is optimized by natural gradient descent, and the coupling layers via an off-policy objective with clipped propensity weights. 
We call this algorithm GNN-ES for Generative Neural Networks Evolutionary Strategies.
Latent space optimization It turns out that GNN-ES can be readily modified to incorporate virtually any existing ES algorithm that operates on the simple distribution νω. For instance, if νω is set to be a multivariate Gaussian with learnable mean and covariance matrix, the latent space optimization (11a) can be performed by either xNES or CMA-ES. This holds for any standard distribution νω and any ES algorithm operating on that distribution. This remark allows us to place GNN-ES in a more general framework and to understand it as a way to improve existing ES algorithms, by providing a principled way to learn complex, non-linear transformations on top of rather standard search distributions (like the Gaussian). In what follows, we will use the GNN prefix in front of an existing ES algorithm to denote its augmented version, with our algorithm working as a plug-in. Pseudo-code for this general algorithm can be found in Appendix B." }, { "heading": "4.3 ADDITIONAL TOOLS", "text": "Using historic data ES algorithms typically use small populations of samples to estimate expectations. Such small sample sizes do not provide enough data exposure for the GNN to build a meaningful transformation gη. To circumvent this problem, we augment the off-line program (14) with samples from past generations thanks to the fused importance sampling estimator (Peshkin & Shelton, 2002). This technique is classical in similar settings like MDP-based reinforcement learning and counterfactual reasoning (Nedelec et al., 2017; Agarwal et al., 2017) and proves to be essential for our problem. Formally, for a given horizon T that controls how far we look into the past, this amounts to storing the samples x drawn from πθt−T+1, . . . , πθt (as well as their respective scores) in a buffer HT. The objective (13) can then be rewritten as:
Eπωt,ηt[ f(x) · πωt+1,η(x) / πωt,ηt(x) ] = T · E(x,f(x))∈HT[ f(x) · πωt+1,η(x) / (πθt−T+1(x) + . . . + πθt(x)) ] (15)
This technique increases the data exposure of the GNN by using past samples (and therefore does not require additional function evaluations) and reduces the variance of the off-line estimator of the original expectation (12) (Nedelec et al., 2017). To control the change in distribution, the fused propensity weights can then be clipped in a similar fashion as in the program (14).
Mode preserving properties To achieve improved exploration, the search distribution should align its tails with the level sets of the objective function. This is not guaranteed when performing the update step (14), since the GNN's update could simply move the mean of the search distribution without shaping the tails. One way to encourage the GNN's capacity to be allocated to the tails is to impose a mode-preserving property. If µ denotes the location of a mode of the latent distribution, then the mode of the distribution πθ generated by the NICE model is located at gη(µ) (see Appendix A for the proof). It is therefore easy to build a map fη, based on the initial gη, that is mode-preserving:
fη(z) ≜ gη(z) − gη(µ) + fηt(µ) (16)
where µ denotes the mode of the latent distribution νω at iteration t. Defined as such, fη preserves the mode of the previous search distribution (since fηt+1(µ) = fηt(µ)), is trivially still a bijection, and remains volume preserving.
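The mode-preserving construction (16) amounts to a one-line wrapper around the push-forward map. A sketch in plain Python (names illustrative; at t = 0, previous_f can be taken as the identity):

def make_mode_preserving(g, mu, previous_f):
    # f(z) = g(z) - g(mu) + f_prev(mu): the image of the latent mode is
    # pinned to its previous location, so only the tails can move.
    anchor = previous_f(mu)
    shift = anchor - g(mu)
    return lambda z: g(z) + shift   # still bijective and volume preserving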
Using the push-forward map fη instead of gη, we explicitly push the flexibility brought by the GNN to impact only the tails of the search distribution. As detailed in the ablation study presented in Appendix F, this additional tool turns out to be essential in order to use GNNs for ES." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "In all that follows, we build the NICE model with three coupling layers. Each coupling layer's non-linear mapping t is built with a one-hidden-layer MLP, with 128 neurons and leaky ReLU (Maas et al., 2013) activation functions. This architecture is kept constant in all our experiments." }, { "heading": "5.1 VISUALIZATION", "text": "We present here two-dimensional visualizations of the behavior of a GNN distribution trained with GNN-xNES - the latent distribution is therefore Gaussian. Figure 3a displays the density level lines of the resulting search distribution on the Rosenbrock function. Figure 3b displays the density level lines of the latent distribution, as well as the learned representation of the objective in the latent space. The search distribution is able to have curved density isolines, enabling better exploration. In the latent space, the global minimum can be reached without navigating a curved valley. Figures 4a and 4b provide similar visualizations on the Rastrigin function, a highly multimodal but symmetric objective. The GNN lowers the barriers between local minima, making it easier to escape from a local minimum to the global minimum.
[Figure 3: Rosenbrock - density level curves (dotted lines) in (a) the data space and (b) the latent space; the global minimum is marked in each panel.]
[Figure 4: Rastrigin - density level curves (dotted lines) in (a) the data space and (b) the latent space; the global minimum is marked in each panel.]" }, { "heading": "5.2 SYNTHETIC OBJECTIVES", "text": "Experimental set-up We present experiments on both unimodal and multimodal objectives for xNES and GNN-xNES. We use the official implementation of xNES1 with default hyper-parameters (such as the population size λ), both as a baseline and as an inner optimization method for GNN-xNES. All experiments are run on the COmparing Continuous Optimizers (COCO) (Hansen et al., 2016) platform, a popular framework for comparing black-box optimization algorithms. In particular, it allows benchmarking different algorithms on translated and rotated versions of the same objectives, in order to evaluate multiple configurations with different global minimum positions. We compare xNES and GNN-xNES on functions from the 2018 Black-Box Optimization Benchmark (BBOB) (Hansen et al., 2010) suite. When comparing these two algorithms, we impose that their initial search distributions are close in order to ensure a fair comparison. We insist on the fact that the xNES algorithm has the exact same configuration whether it is used by itself or as an inner-optimization algorithm for GNN-xNES. Further experimental details, including additional hyper-parameter values for GNN-xNES, are provided in Appendix C.
Unimodal landscapes We run the different algorithms on two unimodal landscapes where we expect GNN search distributions to bring a significant improvement over the Gaussian - as discussed in Section 2.2. These objective functions are the Rotated Rosenbrock function (a curved valley with high conditioning) and the Bent Cigar (an asymmetric and curved Cigar function). Extensive details on these objective functions can be found in the BBOB documentation (Hansen et al., 2010).
Results on additional unimodal functions can be found in Appendix E.
Performance is measured through Empirical Cumulative Distribution Functions (ECDFs) of the runtime, also known as data profiles (Moré & Wild, 2009). Such curves report the fraction of problems solved as a function of the number of objective evaluations. For a given precision ∆, a problem is said to be solved if the best function evaluation made so far is smaller than f(x∗) + ∆. We create 200 problems, equally spaced on a log-scale from ∆ = 10² to ∆ = 10⁻⁵ and, as in the COCO framework, aggregate them over 15 function instances. Results are presented in Figure 5 for the two benchmark functions and in dimensions d = 2, 5, 10.
Multimodal landscapes We now compare the performances of the different algorithms on a collection of three multimodal objectives: the Rastrigin function, the Griewank-Rosenbrock function, and the Schwefel function. Extensive details about these objectives can be found in Hansen et al. (2010).
1available in the PyBrain library (Schaul et al., 2010)
When using ES algorithms to optimize multimodal functions, it is usual to augment them with restart strategies (Hansen, 2016). When convergence is detected, the search distribution is re-initialized in order to search another part of the landscape, and the population size is often increased. This allows a fair comparison between algorithms that converge fast to potentially bad local minima and algorithms that converge more slowly to better minima. There exists a large variety of restart strategies (Loshchilov et al., 2012; Auger & Hansen, 2005); as the official implementation of xNES is not equipped with a default one, we trigger a restart whenever the algorithm makes no progress for more than 30 × d iterations. The standard deviation of the search distribution is set back to 1, and its mean is sampled uniformly within the compact X of interest (defined by the COCO framework). At each restart, the population size of the algorithm is multiplied by 2, as in Auger & Hansen (2005). This restart strategy is used for both xNES and GNN-xNES.
We measure performance as the number of function evaluations needed to find an objective value smaller than f(x∗) + 10⁻⁵ within a budget of d × 10⁵ function evaluations, averaged over 15 function instances. When an algorithm is not able to discover the global minimum within the given budget, we use the maximum number of evaluations as its performance. For visualization purposes, this measure of performance is divided by d². Results are reported in Figure 6. On all objectives, and for all dimensions, GNN-xNES discovers (on average) the global minimum faster than xNES. Additional results on other multimodal functions are presented in Appendix E." }, { "heading": "5.3 REINFORCEMENT LEARNING EXPERIMENTS", "text": "The goal of this section is to present additional comparisons between xNES and GNN-xNES on RL-based objective functions - less synthetic than the previously considered BBOB functions. ES algorithms have recently been used for direct policy search in Reinforcement Learning (RL) and shown to reach performances comparable with state-of-the-art MDP-based techniques (Liu et al., 2019; Salimans et al., 2017). Direct Policy Search ignores the MDP structure of the RL environment and rather considers it as a black-box.
The search for the optimal policy is performed directly in parameter space to maximize the average reward per trajectory:\nf(x) = Eτ∼px ∑ j∈τ rj (17) where px is the distribution of trajectories induced by the policy (the state-conditional distribution over actions) parametrized by x, and r the rewards generated by the environment. The objective (17) can readily be approximated from samples by simply rolling out M trajectories, and optimized using ES. In our experiments2, we set M = 10 and optimize deterministic linear policies (as in Rajeswaran et al. (2017)).\nIn Figures 7a and 7b we report results of the GNN-xNES algorithm compared to xNES, when run on the Mujoco locomotion tasks Swimmer and InvertedDoublePendulum, both from the OpenAI Gym (Brockman et al., 2016). Performance is measured by the average reward per trajectory as a function of the number of evaluations of the objective f . Results are averaged over 5 random seeds (ruling the initialization of the environment and the initial distribution over the policy parameters x). In all three environments, GNN-xNES discovers behaviors of high rewards faster than xNES." }, { "heading": "6 CONCLUSION", "text": "In this work, we motivate the use of GNNs for improving Evolutionary Strategies by pinpointing the limitations of classical search distributions, commonly used by standard ES algorithms. We propose a new algorithm that leverages the high flexibility of distributions generated by bijective GNNs with an ES objective. We highlight that this algorithm can be seen as a plug-in extension to existing ES algorithms, and therefore can virtually incorporate any of them. Finally, we show its empirical advantages across a diversity of synthetic objective functions, as well as from objectives coming from Reinforcement Learning. Beyond the proposal of this algorithm, we believe that our work highlights the role of expressiveness in exploration for optimization tasks. This idea could be leverage in other settings where exploration is crucial, such a MDP-based policy search methods. An interesting line of future work could focus on optimizing GNN-based conditional distribution for RL tasks - an idea already developed in Ward et al. (2019); Mazoure et al. (2019). Other possible extensions to our work could focus on investigating first-order and mixed oracles, such as in Grathwohl et al. (2017); Faury et al. (2018).\n2We used the rllab library (Duan et al., 2016) for our experiments." }, { "heading": "A COMPUTING THE MODE OF THE SEARCH DISTRIBUTION", "text": "We prove here the fact that if µ denotes the location of the mode of the latent distribution νω , then gη(µ) is a mode for πω,η. Indeed, under reasonable smoothness assumptions, one has that y is a mode for πω,η if and only if:\n∂πω,η(x)\n∂x ∣∣∣∣ x=y = 0 (18)\nSince πω,η(x) = νω(hη(x)), this is therefore equivalent to:\n∂hη(x)\n∂x ∣∣∣∣ x=y · ∂νω(z) ∂z ∣∣∣∣ z=hη(y) = 0 (19)\nIn the NICE model, we have that ∣∣∣∂hη(x)∂x ∣∣∣ = 1 for all x hence the matrix ∂hη(x)∂x ∣∣∣\nx=y is invertible\nand its kernel is reduced to the null vector. Therefore:\n∂νω(z)\n∂z ∣∣∣∣ z=hη(y) = 0 (20)\nand therefore µ = hη(y) by definition of µ (the only critical point of νω). Hence since h−1η = gη , we have that y = gη(µ) which concludes the proof." }, { "heading": "B ALGORITHM PSEUDO-CODE", "text": "We provide below the pseudo-code for the generic algorithm GNN-A-ES, where A is a generic ES algorithm operating on a parametric distribution νω . 
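As a complement to the pseudo-code below, here is a compact runnable skeleton of the same loop in Python. All callables (latent_es_update, gnn_update, f_eta, sample_latent) are assumptions standing in for the inner ES algorithm A, the gradient-based solver of eq. (14), the mode-preserving push-forward map, and the latent sampler; this is a sketch, not the authors' implementation.

import numpy as np
from collections import deque

def gnn_es(f, latent_es_update, gnn_update, f_eta, sample_latent,
           n_iters=100, pop_size=8, horizon=3):
    history = deque(maxlen=horizon * pop_size)   # circular buffer H
    for _ in range(n_iters):
        z = sample_latent(pop_size)              # (Sampling) in latent space
        x = np.array([f_eta(zi) for zi in z])    # push-forward to data space
        fx = np.array([f(xi) for xi in x])       # (Evaluation)
        history.extend(zip(x, fx))
        latent_es_update(z, fx)                  # (ES update) of nu_omega
        f_eta = gnn_update(history, f_eta)       # (GNN update) via eq. (14)
    return f_eta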
The additional hyper-parameters are the horizon T as well as the clipping constant ε. The function clip(x, lb, ub) clips the input x between a lower-bound lb and an upper-bound ub.\nAlgorithm 2: GNN-A-ES (ex: GNN-xNES, GNN-CMA-ES) inputs : objective function f , distribution νω and its related ES algorithm A hyper-parameters: clipping constant ε, NICE model architecture, initial parameters ω0, initial weights η0, horizon T , population size λ (Initialization)\nInitialize NICE MLPs weights and biases with η0. LetH be a circular buffer of length T × λ\nwhile not terminate do (Sampling)\nSample Z = {z1, . . . zλ} i.i.d∼ νωt Apply fηt to Z, obtain X = {x1, . . . xλ} i.i.d∼ πωt,ηt Evaluate F = {f(x1), . . . , f(xλ)}. LetH ← H+ {X,F, πωt,ηt}\n(ES update) //One step ES-based optimization of the latent space Apply A to the latent distribution\nωt+1 ← A (νωt , (Z,F ))\n(GNN update)∗ //Many-steps gradient based optimization of the GNN\nηt+1 ' η {∑ x,f∈H f · clip ( πωt+1,η(x)∑ π∈H π(x) , πωt+1,ηt (x)∑ π∈H π(x) · (1− ε), πωt+1,ηt (x)∑ π∈H π(x) · (1 + ε) )}\nend\n∗The (GNN iteration) step can be performed with virtually any gradient descent solver. In all our experiments, we used Adam (Kingma & Ba, 2014) with learning rate 1e-4 for 500 epochs.\nAlgorithm 2 does not detail the mode-preserving addition for the sake of readability and clarity. We provide additional details on this procedure here. Let µt be the mode of the latent distribution νωt . At the (Initialization) step, set α0 = gη0(µ0) where gη(·) is the push-forward map on the NICE model described in Section 3.2. For all round t ≥ 1, let fη(z) = gη(z) − gη(µt) + αt. The variable αt represent the push forward mapping of the latent distribution’s mean under the current model. Every time the latent space is updated - the (ES update) step, let αt+1 = fηt(µt+1). Then, for the (GNN update), optimize the forward-map fη(z) = gη(z) − gη(µt+1) + αt+1. After this update, we have fηt+1(µt+1) = αt+1 = fηt(µt+1), which means that the mode of the search distribution (which is the image of the latent distribution mode) has not been impacted by the GNN update." }, { "heading": "C EXPERIMENTAL DETAILS", "text": "C.1 HYPER-PARAMETERS\nBaselines We use xNES with its default (adapted) hyper-parameters (described in Wierstra et al. (2008)) for both its baselines versions and its inner optimization parts in GNN-xNES. The population size λ is one such hyper-parameters, and is therefore set to λ = 4 + b3 log(d)c. Also, as it is classically done in ES algorithms, we use a rank-based fitness shaping, designed to make the algorithm invariant with respect to order-preserving cost transformations. We use the same fitnessshaping function as in Wierstra et al. (2008).\nGNN-ES Across all experiments, we use the same hyper-parameters for GNN-xNES without fine tuning for each tasks. We use three coupling layers, each with a single hidden layer MLP with 128 hidden neurons and Leaky ReLU activations. The MLPs are initialized via Glorot initialization, and the clipping constant is set to ε = 0.05. The history size T was determined experimentally, and set to T = b3 ∗ (1 + log(d))c. When restarts are used, this history size is divided by the numbers of restart so far (as the population size grows larger).\nC.2 SYNTHETIC OBJECTIVES\nEvery synthetic objective we used in this work was taken from the BBOB2019 benchmark dataset. Their expression as well as additional details on the framework can be found in Hansen et al. (2010; 2016). 
At the beginning of each experiment, we set the Gaussian search distribution (for xNES) and the Gaussian latent distribution (for GNN-xNES) to a standard normal, with a mean uniformly sampled within the compact set X of interest (defined by the COCO framework).

C.3 RL ENVIRONMENTS

Table 1 provides details on the RL environments used to compare GNN-xNES and xNES, such as the dimensions of the state space S and action space A, the number d of the policy's degrees of freedom, and the maximum number of steps m per trajectory. At the beginning of each experiment, we set the Gaussian search distribution (for xNES) and the Gaussian latent distribution (for GNN-xNES) to a standard normal with zero mean. In this particular case, where the function evaluations are noisy, we kept the default population size of the xNES algorithm." }, { "heading": "D TWO-DIMENSIONAL VISUALIZATIONS", "text": "We provide in Figure 8 additional two-dimensional visualizations of the behavior of GNN-xNES, on the Rosenbrock, Rastrigin, Beale and Bent-Cigar functions. We see that the NICE distributions can efficiently fit each optimization landscape, without having to reduce their entropy like a multivariate normal would." }, { "heading": "E ADDITIONAL RESULTS", "text": "We present here some additional results on unimodal and multimodal synthetic functions. Figure 9 presents ECDF curves obtained on the Attractive Sector function, a highly asymmetrical function around its global minimum. On such a function, GNN-xNES seems to accelerate xNES in small dimensions; however, this speed-up disappears in higher dimensions. Figure 10 presents results on the Rosenbrock function (without random rotations). Again, GNN-xNES accelerates the xNES algorithm. Figure 11 presents results on the multimodal functions Gallagher's Gaussian 101 Peaks and Gallagher's Gaussian 21 Peaks. Again, GNN-xNES discovers the global minimum faster (on average) than xNES.

In our multimodal experiments, we used simulated restarts as a fair means of comparing different algorithms (this is common practice in order to fairly compare algorithms that converge fast to potentially bad local minima with algorithms that converge slowly to the global minimum). While the empirical results show that GNN-xNES accelerates xNES in the discovery of the global minimum, they do not prove that GNN-xNES leverages the flexibility of the GNN to detect the global minimum when xNES misses it. In an attempt to show that this is indeed the case, we report in Table 2 the number of restarts needed by both GNN-xNES and xNES to discover the global minimum on the Rastrigin function (averaged over the 15 randomly initialized runs). For this instance, GNN-xNES consistently discovers the global minimum with fewer restarts than xNES.

As detailed in Section 4, one can apply Algorithm 2 as a plug-in to any ES method. So far, we empirically evaluated the benefits of our approach by comparing xNES against its GNN extension (GNN-xNES). We present in Figure 12 additional evaluations obtained by comparing CMA-ES and its GNN extension (denoted GNN-CMA-ES) on the Rosenbrock function in dimensions 2, 5 and 10. CMA-ES is considered to be the state-of-the-art ES algorithm, and improving its performance is a non-trivial task. On the considered example, GNN-CMA-ES improves on CMA-ES, highlighting the empirical benefit of our approach for a large class of ES algorithms. One can however observe that the performance boost brought by the GNN extension is milder for GNN-CMA-ES than for GNN-xNES.
We suspect that this is due to the use of cumulation via an evolution path in CMA-ES3, which basically introduces a momentum-like update when optimizing the latent distribution. While using an evolution path makes a lot of sense when optimizing a stationary objective, it can be quite harmful for non-stationary ones. We therefore believe that the cumulation step in CMA-ES (for the latent distribution) and the GNN optimization (which makes the objective optimized by CMA-ES in the latent space non-stationary) can lead to conflicting updates and might hinder the benefits brought by the GNN's additional flexibility. Designing a GNN update strategy complying with the use of evolution paths could therefore be a way of further improving GNN-CMA-ES, and is left for future work.

3We used the PyCMA library (Hansen et al., 2019) for these experiments." }, { "heading": "F ABLATION STUDY", "text": "We present here an ablation study of two additional tools that we introduced after the alternating optimization view: the mode-preserving extension (16) as well as the history augmentation (15). Figure 13 presents ECDF curves on the Rosenbrock, Rotated Rosenbrock and Bent Cigar functions in 2D, for a version of GNN-xNES that doesn't use history but only the current population. Using history, and therefore exposing the GNN to larger datasets, improves the procedure. Figure 14 presents similar results for a version of GNN-xNES without the mode-preserving property (16). Again, one can notice that ensuring that the GNN training is mode-preserving is crucial to improve the experimental results." } ]
2019
null
SP:d6218fdd95b48f3e69bf12e96f938cecde8ff7ab
[ "The paper proposes a ‘potential flow generator’ that can be seen as a regularizer for traditional GAN losses. It is based on the idea that samples flowing from one distribution to another should follow a minimum travel cost path. This regularization is expressed as an optimal transport problem with a squared Euclidean cost. Authors rely on the dynamic formulation of OT proposed by Benamou and Brenier, 2000. They propose to learn a time-dependent potential field which gradient defines the velocity fields used to drive samples from a source distribution toward a target one. Experiments on a simple 1D case (where the optimal transport map is known), and on images with an MNIST / CelebA qualitative example.", "This is a great paper using optimal transport theory for generative and implicit models. Instead of using general vector fields, the authors apply the potential vector fields in optimal transport theory to design neural networks. The mathematics is correct with convincing examples. This brings an important mathematical connection between fluid dynamics and GANs or implicit models. " ]
We propose a potential flow generator with L2 optimal transport regularity, which can be easily integrated into a wide range of generative models including different versions of GANs and normalizing flow models. With only a slight augmentation to the original generator loss functions, our generator not only tries to transport the input distribution to the target one, but also aims to find the one with minimum L2 transport cost. We show the effectiveness of our method in several 2D problems, and illustrate the concept of “proximity” due to the L2 optimal transport regularity. Subsequently, we demonstrate the effectiveness of the potential flow generator in image translation tasks with unpaired training data from the MNIST dataset and the CelebA dataset with a comparison against vanilla WGAN-GP and CycleGAN.
[]
[ { "authors": [ "REFERENCES Luigi Ambrosio", "Nicola Gigli", "Giuseppe Savaré" ], "title": "Gradient flows: in metric spaces and in the space of probability measures", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Marc G Bellemare", "Ivo Danihelka", "Will Dabney", "Shakir Mohamed", "Balaji Lakshminarayanan", "Stephan Hoyer", "Rémi Munos" ], "title": "The Cramer distance as a solution to biased Wasserstein gradients", "venue": "arXiv preprint arXiv:1705.10743,", "year": 2017 }, { "authors": [ "Jean-David Benamou", "Yann Brenier" ], "title": "A computational fluid mechanics solution to the mongekantorovich mass transfer problem", "venue": "Numerische Mathematik,", "year": 2000 }, { "authors": [ "Yann Brenier" ], "title": "Polar factorization and monotone rearrangement of vector-valued functions", "venue": "Communications on pure and applied mathematics,", "year": 1991 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ishan Deshpande", "Ziyu Zhang", "Alexander G Schwing" ], "title": "Generative modeling using the sliced Wasserstein distance", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Wilfrid Gangbo", "Robert J McCann" ], "title": "The geometry of optimal transportation", "venue": "Acta Mathematica,", "year": 1996 }, { "authors": [ "Matthias Gelbrich" ], "title": "On a formula for the L2 Wasserstein metric between measures on Euclidean and Hilbert spaces", "venue": "Mathematische Nachrichten,", "year": 1990 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of Wasserstein GANs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Ian Jolliffe" ], "title": "Principal component analysis", "venue": null, "year": 2011 }, { "authors": [ "Taeksoo Kim", "Moonsu Cha", "Hyunsoo Kim", "Jung Kwon Lee", "Jiwon Kim" ], "title": "Learning to discover cross-domain relations with generative adversarial networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Isaac E Lagaris", "Aristidis 
Likas", "Dimitrios I Fotiadis" ], "title": "Artificial neural networks for solving ordinary and partial differential equations", "venue": "IEEE transactions on neural networks,", "year": 1998 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "MNIST handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann. lecun. com/exdb/mnist,", "year": 2010 }, { "authors": [ "Jacob Leygonie", "Jennifer She", "Amjad Almahairi", "Sai Rajeswar", "Aaron Courville" ], "title": "Adversarial computation of optimal transport maps", "venue": "arXiv preprint arXiv:1906.09691,", "year": 2019 }, { "authors": [ "Wuchen Li", "Guido Montúfar" ], "title": "Natural gradient via optimal transport", "venue": "Information Geometry,", "year": 2018 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Andrew L. Maas", "Awni Y. Hannun", "Andrew Y. Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In ICML Workshop on Deep Learning for Audio, Speech and Language Processing,", "year": 2013 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Felix Otto" ], "title": "Viscous fingering: an optimal bound on the growth rate of the mixing zone", "venue": "SIAM Journal on Applied Mathematics,", "year": 1997 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George Em Karniadakis" ], "title": "Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations", "venue": "arXiv preprint arXiv:1711.10561,", "year": 2017 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George Em Karniadakis" ], "title": "Physics informed deep learning (part ii): Data-driven discovery of nonlinear partial differential equations", "venue": "arXiv preprint arXiv:1711.10566,", "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "arXiv preprint arXiv:1505.05770,", "year": 2015 }, { "authors": [ "Tim Salimans", "Han Zhang", "Alec Radford", "Dimitris Metaxas" ], "title": "Improving GANs using optimal transport", "venue": "arXiv preprint arXiv:1803.05573,", "year": 2018 }, { "authors": [ "Vivien Seguy", "Bharath Bhushan Damodaran", "Rémi Flamary", "Nicolas Courty", "Antoine Rolet", "Mathieu Blondel" ], "title": "Large-scale optimal transport and mapping estimation", "venue": "arXiv preprint arXiv:1711.02283,", "year": 2017 }, { "authors": [ "Justin Sirignano", "Konstantinos Spiliopoulos" ], "title": "DGM: A deep learning algorithm for solving partial differential equations", "venue": "Journal of Computational Physics,", "year": 2018 }, { "authors": [ "Alex Tong Lin", "Wuchen Li", "Stanley Osher", "Guido Montúfar" ], "title": "Wasserstein proximal of GANs", "venue": null, "year": 2018 }, { "authors": [ "Giulio Trigila", "Esteban G Tabak" ], "title": "Data-driven optimal transport", "venue": "Communications on Pure and Applied Mathematics,", "year": 2016 }, { "authors": [ "Karren D Yang", "Caroline Uhler" ], "title": "Scalable unbalanced optimal transport using generative adversarial networks", "venue": "arXiv preprint arXiv:1810.11447,", "year": 2018 }, { 
"authors": [ "Zili Yi", "Hao Zhang", "Ping Tan", "Minglun Gong" ], "title": "DualGAN: Unsupervised dual learning for image-to-image translation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many of the generative models, for example, generative adversarial networks (GANs) (Goodfellow et al., 2014; Arjovsky et al., 2017; Salimans et al., 2018) and normalizing flow models (Rezende & Mohamed, 2015; Kingma & Dhariwal, 2018; Chen et al., 2018), aim to find a generator that could map the input distribution to the target distribution.\nIn many cases, especially when the input distributions are purely noises, the specific maps between input and output are of little importance as long as the generated distributions match the target ones. However, in other cases like imageto-image translations, where both input and target distributions are distributions of images, the generators are required to have additional regularity such that the input individuals are mapped to the “corresponding” outputs in some sense. If paired input-output samples are provided, Lp penalty could be hybridized into generators loss functions to encourage the output individuals to fit the ground truth (Isola et al., 2017). For the cases without paired data, a popular approach is to introduce another generator and encourage the two generators to be the inverse maps of each other, as in CycleGAN (Zhu et al., 2017), DualGAN (Yi et al., 2017) and DiscoGAN (Kim et al., 2017), etc. However, such a pair of generators is not unique and lacks clear mathematical interpretation about its effectiveness.\nIn this paper we introduce a special generator, i.e., the potential flow generator, with L2 optimal transport regularity. By applying such generator, not only are we trying to find a map from the input distribution to the target one, but we also aim to find the optimal transport map that minimizes the squared Euclidean transport distance. In Figure 1 we provide a schematic comparison between generators with and without optimal transport regularity. While both generators provide a scheme to map from the input distribution to the output distribution, the total squared transport distances in the\nleft generator is larger than that in the right generator. Note that the generator with optimal transport regularity has the characteristic of “proximity”in that the inputs tend to be mapped to nearby outputs. As we will show later, this “proximity” characteristic of L2 optimal transport regularity could be utilized in image translation tasks. Compared with other approaches like CycleGAN, the L2 optimal transport regularity has a much clearer mathematical interpretation.\nThere have been other approaches to learn the optimal transport map in generative models. For example, Seguy et al. (2017) proposed to first learn the regularized optimal transport plan and then the optimal transport map, based on the dual form of regularized optimal transport problem. Also, Yang & Uhler (2018) proposed to learn the unbalanced optimal transport plan in an adversarial way derived from a convex conjugate representation of divergences. In the W2GAN model proposed by Leygonie et al. (2019), the discriminator’s objective is the 2-Wasserstein metric so that the generator is supposed to recover the L2 optimal transport map. 
All the above approaches need to introduce, and are limited to, specific loss functions to train the generators.

Our proposed potential flow generator takes a different approach in that, with only a slight augmentation to the original generator loss functions, our generator can be integrated into a wide range of generative models with various generator loss functions, including different versions of GANs and normalizing flow models. This simple modification makes our method easy to adopt on various tasks considering the existing rich literature and the future developments of generative models.

In Section 2 we present a formal definition of the optimal transport map and the motivation to apply L2 optimal transport regularity to generators. In Section 3 we give a detailed formulation of the potential flow generator and the augmentation to the original loss functions. Results are then provided in Section 4. We include the discussion and conclusions in Section 5." }, { "heading": "2 GENERATIVE MODELS AND OPTIMAL TRANSPORT MAP", "text": "First, we introduce the concept of push forward, which will be used extensively in the paper.

Definition 1 Given two Polish spaces X and Y, let B(X) and B(Y) be the Borel σ-algebras on X and Y, and P(X), P(Y) the sets of probability measures on B(X) and B(Y). Let f : X → Y be a Borel map, and µ ∈ P(X). We define f#µ ∈ P(Y), the push forward of µ through f, by
$$f_{\#}\mu(A) = \mu\big(f^{-1}(A)\big), \quad \forall A \in \mathcal{B}(\mathbb{Y}). \qquad (1)$$

With the concept of push forward, we can formulate the goal of GANs and normalizing flow models as training the generator G such that G#µ is equal to, or at least close to, ν in some sense, where µ and ν are the input and target distributions, respectively. Usually, the loss functions for training the generators are metrics of closeness that vary across models. For example, in continuous normalizing flows (Chen et al., 2018), such a metric of closeness is DKL(G#µ||ν) or DKL(ν||G#µ). In Wasserstein GANs (WGANs) (Arjovsky et al., 2017), the metric of closeness is the Wasserstein-1 distance between G#µ and ν, which is estimated in a variational form with the discriminator neural network. As a result, the generator and discriminator neural networks are trained in an adversarial way:
$$\min_{G}\; \max_{D \text{ is 1-Lipschitz}}\; \mathbb{E}_{x\sim\nu}\, D(x) - \mathbb{E}_{z\sim\mu}\, D(G(z)), \qquad (2)$$
where D is the discriminator neural network and the Lipschitz constraint could be imposed via the gradient penalty (Gulrajani et al., 2017), spectral normalization (Miyato et al., 2018), etc.

Now we introduce the concept of the optimal transport map as follows:

Definition 2 Given a cost function c : X × Y → R, and µ ∈ P(X), ν ∈ P(Y), we let T be the set of all transport maps from µ to ν, i.e. T := {f : f#µ = ν}. Monge's optimal transport problem is to minimize the cost functional C(f) over T, where
$$C(f) = \mathbb{E}_{x\sim\mu}\, c(x, f(x)), \qquad (3)$$
and the minimizer f∗ ∈ T is called the optimal transport map.

In this paper, we are concerned mostly with the case where X = Y = Rd with the L2 transport cost, i.e., the cost c(x, y) = ‖x − y‖2. We assume that µ and ν are absolutely continuous w.r.t. the Lebesgue measure, i.e., they have probability density functions. In general, Monge's problem could be ill-posed in that T could be the empty set or there may be no minimizer in T. Also, the optimal transport map could be non-unique. However, for the special case we consider, there exists a unique solution to Monge's problem (Brenier, 1991; Gangbo & McCann, 1996).
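As a concrete illustration of Definition 2 (our sketch, not code from the paper), the Monge cost $C(f) = \mathbb{E}_{x\sim\mu}\,c(x, f(x))$ with the squared Euclidean cost can be estimated by Monte Carlo. The example reuses the Gaussian pair and analytical map of problem 1 in Section 4.1 below, whose minimal cost is 0.25 + 0.25 = 0.5.

```python
import numpy as np

def monge_cost(f, sample_mu, n=100_000):
    """Monte Carlo estimate of C(f) = E_{x~mu} ||x - f(x)||^2 (Eq. 3)."""
    x = sample_mu(n)
    return float(np.mean(np.sum((f(x) - x) ** 2, axis=1)))

rng = np.random.default_rng(0)
sample = lambda n: rng.normal(size=(n, 2)) * np.array([0.5, 1.0])  # mu = N(0, diag(0.25, 1))
f_opt = lambda x: x * np.array([2.0, 0.5])                         # f((x, y)) = (2x, 0.5y)
print(monge_cost(f_opt, sample))                                   # ~0.5, the minimal cost
```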
Informally speaking, with the L2 transport cost the optimal transport map has the characteristic of “proximity”, i.e., the inputs tend to be mapped to nearby outputs. In image translation tasks, such a “proximity” characteristic would be helpful if we could properly embed the images into a Euclidean space such that our preferred input-output pairs are close to each other. A similar idea is also proposed in Yang & Uhler (2018) for unbalanced optimal transport. Apart from image translations, the L2 optimal transport problem is important in many other respects. For example, it is closely related to gradient flows (Ambrosio et al., 2008), Fokker-Planck equations (Santambrogio, 2017), flow in porous media (Otto, 1997), etc." }, { "heading": "3 POTENTIAL FLOW GENERATOR", "text": "" }, { "heading": "3.1 POTENTIAL FLOW FORMULATION OF OPTIMAL TRANSPORT MAP", "text": "We assume that µ and ν have probability densities ρµ and ρν, respectively, and consider all smooth enough density fields ρ(t, x) and velocity fields v(t, x), where t ∈ [0, T], subject to the continuity equation as well as initial and final conditions
$$\partial_t \rho + \nabla \cdot (\rho v) = 0, \qquad \rho(0,\cdot) = \rho_\mu, \qquad \rho(T,\cdot) = \rho_\nu. \qquad (4)$$
The above equation states that such a velocity field will induce a transport map: we can construct an ordinary differential equation (ODE)
$$\frac{du}{dt} = v(t, u), \qquad (5)$$
and the map from the initial point to the final point gives the transport map from µ to ν.

As proposed by Benamou & Brenier (2000), for the transport cost function c(x, y) = ‖x − y‖2, the minimal transport cost is equal to the infimum of
$$T \int_0^T\!\!\int_{\mathbb{R}^d} \rho(t,x)\,|v(t,x)|^2\, dx\, dt \qquad (6)$$
among all (ρ, v) satisfying Equation (4). The optimality condition is given by
$$v(t,x) = \nabla\phi(t,x), \qquad \partial_t \phi + \frac{1}{2}|\nabla\phi|^2 = 0. \qquad (7)$$
In other words, the optimal velocity field is actually induced by a flow with a time-dependent potential φ(t, x). The use of this formulation is well known in the optimal transport community (Trigila & Tabak, 2016; Peyré et al., 2019). In this paper we integrate this formulation into deep generative models. Instead of solving Monge's problem and finding the exact L2 optimal transport map, which is unrealistic due to the limited families of neural network functions as well as the errors arising from training the neural networks, our goal is to regularize the generators in a wide range of generative models, so that the generator maps could approximate the L2 optimal transport map, at least in low-dimensional problems. The maps would also be endowed with the characteristic of “proximity” so that we can apply them to engineering problems." }, { "heading": "3.2 POTENTIAL FLOW GENERATOR", "text": "The potential φ(t, x) is the key function to estimate, since the velocity field could be obtained by taking the gradient of the potential, and consequently the transport map could be obtained from Equation 5. There are two strategies to use neural networks to represent φ. One can take advantage of the fact that the time-dependent potential field φ is actually uniquely determined by its initial condition through Equation 7, and use a neural network to represent the initial condition of φ, i.e. φ(0, x), while approximating φ(t, x) via time discretization schemes. Alternatively, one can use a neural network to represent φ(t, x) directly and later apply the PDE regularity for φ(t, x) in Equation 7. We name the generators defined in the above two approaches the discrete potential flow generator and the continuous potential flow generator, respectively, and give a detailed formulation as follows."
}, { "heading": "3.2.1 DISCRETE POTENTIAL FLOW GENERATOR", "text": "In the discrete potential flow generator, we use the neural network φ̃0(x) : Rd → R to represent the initial condition of φ(t,x), i.e. φ(0,x). The potential field φ(t,x) as well as the velocity field v(t,x) could then be approximated by different time discretization schemes. As an example, here we use the first-order forward Eular scheme for the simplicity of implementation. To be specific, suppose the time discretization step is ∆t and the number of total steps is n with n∆t = T , then for i = 0, 1...n, φ(i∆t,x) could be represented by φ̃i(x), where\nφ̃i+1(x) = φ̃i(x)− ∆t\n2 |∇φ̃i(x)|2, for i = 0, 1, 2..., n− 1. (8)\nConsequently, the velocity field v(i∆t,x) could be represented by ṽi(x), where\nṽi(x) = ∇φ̃i(x), for i = 0, 1...n. (9)\nFinally, we can build the transport map from Equation 5:\nũ0(x) = x,\nũi+1(x) = ũi(x)+∆tṽi(ũi(x)), for i = 0, 1, 2...n− 1, (10)\nwith G(·) = ũn(·) be our transport map. The discrete potential flow generator has built-in optimal transport regularity since the optimal condition (Equation 7) is encoded in the time discretization (Equation 8). However, such discretization also introduces nested gradients, which dramatically increases the computational cost when the number of total steps n is increased. In our tests, we found that even n = 5 is almost intractable." }, { "heading": "3.2.2 CONTINUOUS POTENTIAL FLOW GENERATOR", "text": "In the continuous potential flow generator, we use the neural network φ̃(t,x) : R × Rd → R to represent φ(t,x). Consequently, the velocity field v(t,x) can be represented by ṽ(t,x), where\nṽ(t,x) = ∇φ̃(t,x). (11)\nWith the velocity field we could estimate the transport map by solving the ODE (Equation 5) using any numerical ODE solver. As an example, we can use the first-order forward Eular scheme, i.e.\nũ(0,x) = x,\nũ((i+ 1)∆t,x) = ũ(i∆t,x)+∆tṽ(i∆t, ũ(i∆t,x)), for i = 0, 1, 2...n− 1, (12)\nwith G(·) = ũ(T, ·) be the transport map, where ∆t is the time discretization step and n is the number of total steps with n∆t = T .\nIn the continuous potential flow generator, increasing the number of total steps would not introduce high order differentiations, therefore we could have large n, for a better precision of the ODE solver. Different from the discrete potential flow generator, the optimal condition (Equation 7) is not encoded in the continuous potential flow generator, therefore we need to penalize Equation 7 in the loss function, as we will discuss in the next subsection.\nOne may come up with another strategy of imposing the L2 optimal transport regularity: to use a vanilla generator, which is a neural network directly mapping from inputs to outputs, and penalize the L2 transport cost, i.e., the loss function is\nLvanilla = Loriginal + αEx∼µ‖G(x)− x‖2, (13)\nwhere Loriginal is the original loss function for the generator, and α is the weight for the transport penalty. We emphasize that such strategy is much inferior to penalizing Equation 7 in the continuous potential flow generator. When training the vanilla generator with L2 transport penalty, no matter how we weight the L2 transport cost penalty, we always have to make a trade off between “matching the generated distribution with the target one” and “reducing the transport cost” since there is always a conflict between them, and consequently G#µ will be biased towards µ. On the other hand, there is no conflict between matching the distributions and penalizing Equation 7 in the continuous potential flow generator. 
, { "heading": "3.3 TRAINING THE POTENTIAL FLOW GENERATOR", "text": "While the optimality condition (Equation 7) has been considered in the above two generators, the constraints of the initial and final conditions have so far been neglected. However, the constraint of the initial and final conditions provides the principle for training the neural network: we need to tune the parameters in the neural network φ̃ so that G#µ matches ν. This could be done in the fashion of GANs or normalizing flow models." }, { "heading": "3.3.1 LOSS IN GAN MODELS", "text": "For the discrete potential flow generator, since the optimal transport regularity is already built in, the loss for training G is simply the GAN loss for the generator, i.e.
$$L_{\text{D-PFG}} = L_{\text{GAN}}, \qquad (14)$$
where LGAN depends on the specific version of GAN. For example, if we use WGAN-GP, then $L_{\text{GAN}} = -\mathbb{E}_{z\sim\mu}\, D(G(z))$, where D is the discriminator neural network.

For the continuous potential flow generator, as mentioned above, we also need to make φ̃ satisfy Equation 7 for the optimal transport regularity. Inspired by the applications of neural networks in solving forward and backward problems of PDEs (Lagaris et al., 1998; Raissi et al., 2017a;b; Sirignano & Spiliopoulos, 2018), we penalize the squared residual of the PDE on the so-called “residual” points. In particular, the loss for the continuous potential flow generator would be
$$L_{\text{C-PFG}} = L_{\text{GAN}} + \lambda\, \frac{1}{N} \sum_{i=1}^{N} \Big[\partial_t \tilde\phi(t_i, x_i) + \frac{1}{2}\big|\nabla\tilde\phi(t_i, x_i)\big|^2\Big]^2, \qquad (15)$$
where $\{(t_i, x_i)\}_{i=1}^{N}$ are the residual points for estimating the residual of the PDE (Equation 7), and λ is the weight of the PDE penalty. In this paper we set them as the points on the “trajectories” of the input samples, i.e.
$$\{(t_i, x_i)\}_{i=1}^{N} = \bigcup_{i=0}^{n} \bigcup_{x_j \in B} \big\{\big(i\Delta t,\; \tilde u(i\Delta t, x_j)\big)\big\}, \qquad (16)$$
where B is the set of batch samples from µ. Note that the coordinates of the residual points involve ũ, but this should not be taken into consideration when calculating the gradient of the loss function with respect to the generator parameters.

We point out that the residual points should cover the whole spatial-temporal domain. Theoretically, only penalizing the squared residual of the PDE on “trajectories” could lead to failure in approximating the L2 optimal transport map. However, in our numerical experiments, this flawed sampling strategy still works. As an improvement, in each training iteration we can perturb the trajectory points with Gaussian noise in space and uniform noise in time as residual points, so that in principle they are sampled from the whole spatial-temporal domain." }
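The PDE penalty term of Equation 15 can be sketched as follows (our illustration; the residual-point coordinates are detached, matching the note above that they should not contribute gradients through ũ):

```python
import torch

def pde_penalty(phi_net, t, x):
    # Mean squared residual of d_t phi + |grad_x phi|^2 / 2 = 0 (Eq. 7) on
    # residual points (t_i, x_i), e.g. the trajectory points of Eq. 16.
    t = t.detach().requires_grad_(True)   # shape (N, 1)
    x = x.detach().requires_grad_(True)   # shape (N, d)
    phi = phi_net(torch.cat([t, x], dim=1)).sum()
    dphi_dt, dphi_dx = torch.autograd.grad(phi, [t, x], create_graph=True)
    residual = dphi_dt.squeeze(1) + 0.5 * (dphi_dx ** 2).sum(dim=1)
    return (residual ** 2).mean()

# Usage inside the generator loss of Eq. 15:
#   loss = gan_loss + lam * pde_penalty(phi_net, t_pts, x_pts)
```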
, { "heading": "3.3.2 LOSS IN NORMALIZING FLOW MODELS", "text": "Our continuous potential flow generator can be viewed as a further development of Neural ODEs (Chen et al., 2018) applied to generative models, i.e., the continuous normalizing flow. The difference mainly lies in that we set the velocity as the gradient of a time-dependent potential function, and an augmented PDE loss is required for the generator. While both density matching and maximum likelihood training could be applied, here we take the latter as an example: we assume that the density of µ and samples from ν are available, and we maximize $\mathbb{E}_{y\sim\nu}[\log p_{G_{\#}\mu}(y)]$, where $p_{G_{\#}\mu}$ is the density of G#µ. Then, the loss for the continuous potential flow generator would be
$$L_{\text{C-PFG}} = -\mathbb{E}_{y\sim\nu}\big[\log p_{G_{\#}\mu}(y)\big] + \lambda\, \frac{1}{N} \sum_{i=1}^{N} \Big[\partial_t \tilde\phi(t_i, x_i) + \frac{1}{2}\big|\nabla\tilde\phi(t_i, x_i)\big|^2\Big]^2, \qquad (17)$$
where, as in the GAN model, $\{(t_i, x_i)\}_{i=1}^{N}$ are the residual points for estimating the residual of the PDE (Equation 7), and λ is the weight of the PDE penalty. $\mathbb{E}_{y\sim\nu}[\log p_{G_{\#}\mu}(y)]$ can be estimated via the approach introduced in Chen et al. (2018); we give a detailed description in Appendix B.

The discrete potential flow generator cannot be trivially applied in normalizing flow models, since we found that the time step size is too large to calculate the density accurately." }, { "heading": "4 RESULTS", "text": "" }, { "heading": "4.1 2D PROBLEMS", "text": "In this subsection, we apply the potential flow generators to several two-dimensional problems. We mainly study the following two problems, where we know analytical solutions for the optimal transport maps. In problem 1 we assume that both µ and ν are Gaussian distributions with µ = N([0; 0], [0.25, 0; 0, 1]) and ν = N([0; 0], [1, 0; 0, 0.25]). In this case the optimal transport map is f((x, y)) = (2x, 0.5y). In problem 2 we assume that µ and ν are concentrated on concentric rings. In polar coordinates, suppose µ has (r, θ) uniformly distributed on [0.5, 1] × [0, 2π), while ν has (r, θ) uniformly distributed on [2, 2.5] × [0, 2π), where r and θ are the radius and angle, respectively. In this case the optimal transport map is f((r, θ)) = (r + 1.5, θ) in polar coordinates. We present the proofs in Appendix A. Samples from µ and ν as well as the optimal transport maps in both problems are illustrated in Figure 2. We prepared 40000 samples for the input distribution and target distribution in each problem as training data for Sections 4.1.1 and 4.1.2, and 1000 samples for Section 4.1.3." }, { "heading": "4.1.1 VANILLA GENERATOR VERSUS POTENTIAL FLOW GENERATOR", "text": "For the above two problems we compare the following methods: (a) vanilla generator with L2 transport cost penalty, i.e., using the loss function in Equation 13 with the GAN loss as Loriginal, (b) discrete potential flow generator, and (c) continuous potential flow generator with PDE penalty. For the vanilla generator and the continuous potential flow generator, we test different weights for the penalty in order to compare the influence of the penalty weights in both generators. As the GAN loss for the generators we use the sliced Wasserstein distance1, due to its relatively low computational cost, robustness, and clear mathematical interpretation in low-dimensional problems (Deshpande et al., 2018). In Figure 2 we illustrate the maps of the different generators. A more systematic and quantitative comparison from three independent runs for each case is provided in Table 1, where the best results are marked in bold. The statistics come from 100,000 testing data points.

1Strictly speaking, there is no “adversarial” training when we use the sliced Wasserstein loss, since the distance is estimated explicitly rather than represented by another neural network. However, the idea of computing the distance between fake data and real data coincides with other GANs, especially WGANs. Therefore, in this paper we view the sliced Wasserstein distance as a special version of GAN loss.

As we already mentioned, vanilla generators with the L2 transport penalty make G#µ biased towards µ to reduce the transport cost from µ to G#µ. This is clearly shown in both problems with the penalty weight α = 0.1. In fact, we observed more significant biases with larger penalty weights.
For the cases with smaller penalty weights α = 0.01, 0.001, in some of the runs, while G#µ is close to ν, the maps of the generators are far from the optimal ones, which shows that the L2 transport penalty cannot provide sufficient regularity if the penalty weight is too small. These numerical results are consistent with our earlier discussion about the intrinsic limitation of the L2 transport penalty.

On the other hand, the potential flow generators give a better match between G#µ and ν, as well as smaller errors between the estimated transport maps and the analytical optimal transport maps. Notably, in both problems the continuous potential flow generators give good results with a wide range of PDE penalty weights ranging from 0.1 to 10, which shows the superiority of the PDE penalty in the continuous potential flow generators compared with the transport penalty in vanilla generators. We also report that, while in the first problem the discrete potential flow generator achieves a result comparable with the continuous potential flow generators, in the second problem we encountered “NAN” problems during training of the discrete potential flow generator in some of the runs. This indicates that the discrete potential flow generator is not as robust as the continuous one, which could be attributed to the high-order differentiations and the small number of total time steps n in the discrete potential flow generators." }, { "heading": "4.1.2 CYCLEGAN VERSUS POTENTIAL FLOW GENERATOR", "text": "We also apply CycleGAN to the above two problems, with different random seeds. Here, we use feedforward networks as generators and discriminators, with WGAN-GP for the GAN loss function, and L1 loss as the cycle-consistency loss with weight 5. In Figure 2 we illustrate the maps of G (red arrows) and F (blue arrows), i.e., the two generators in CycleGAN. The red and blue arrows overlap with opposite directions, which indicates that G and F are approximately the inverse maps of each other, as we expect from the cycle-consistency loss. However, the maps are totally different in the three runs with different random seeds, which agrees with our discussion in Section 1 that the generator pair in CycleGAN is not unique. Moreover, the generator maps are less “regular” than the maps from the potential flow generator. Specifically, we can hardly interpret the generator maps given by CycleGAN." }, { "heading": "4.1.3 DISCRETE REGULARIZED OPTIMAL TRANSPORT SOLVER VERSUS POTENTIAL FLOW GENERATOR", "text": "Finally, we compare the continuous potential flow generator in SWG and WGAN-GP with the discrete regularized optimal transport solver2 introduced by Seguy et al. (2017) on the above two problems. The results of the output distributions, as well as the errors between the estimated transport maps and the analytical optimal transport maps, are shown in Table 2, where the best results for the different batch size setups are marked in bold. The statistics come from 100,000 testing data points.

2We used the code from https://github.com/vivienseguy/Large-Scale-OT, with the author's permission. All the setups are kept as default, except the training data and batch size. Here we use the same training dataset of size 1000 for all the methods, which is of the same magnitude as their original training dataset in the test code. Their default batch size is 50. Note that their solver is actually looking for the regularized optimal transport, but when the weight for regularization is small, e.g. 0.02 in their code, we expect the results to be close to the exact optimal transport.
We first set the batch size to 50; in this case, the errors of the maps are similar for all three methods, while the output distributions of the continuous potential flow generator in WGAN-GP match the target ones best. As is well known, the gradients of the sample Wasserstein loss are biased (Bellemare et al., 2017), thus the output distributions are biased for GANs based on the Wasserstein loss. This problem can be serious when the batch size is small. Therefore, we increased the batch size to 1000. In this case, we encountered “NAN” problems when learning the barycentric mapping in the discrete regularized OT solver. SWG and WGAN-GP with continuous potential flow generators are stable, and we can see an improvement in the output distributions and in the errors of the maps, as we expected." }, { "heading": "4.1.4 MORE PROBLEMS IN GAN MODEL AND NORMALIZING FLOW MODEL", "text": "Apart from the previous two problems, we also apply WGAN-GP and continuous normalizing flow models with continuous potential flow generators to several more complicated distributions. The results are illustrated in Figure 3. We can see the match between G#µ and ν in each of the problems, as well as that the samples of µ tend to be mapped to nearby positions. This shows the effectiveness of the continuous potential flow generator in various generative models, as well as the characteristic of “proximity” in the potential flow generator maps due to the L2 optimal transport regularity." }, { "heading": "4.2 IMAGE TRANSLATION TASKS", "text": "In this section, we aim to show the capability of the continuous potential flow generator in dealing with high-dimensional problems, and also to show its advantage in image translation tasks with unpaired training data. We use WGAN-GP for the GAN loss. Before feeding the images into the generators, we embed them into a Euclidean space, where the L2 distances between embedding vectors should quantify the similarities between images. In this paper we apply principal component analysis (PCA) (Jolliffe, 2011), a simple but highly interpretable approach, to conduct the image embedding." }, { "heading": "4.2.1 THE MNIST DATASET", "text": "should be close in the L2 distance." }, { "heading": "4.2.2 THE CELEBA DATASET", "text": "Figure 5: Comparison between our method, vanilla WGAN-GP, CycleGAN with WGAN-GP as GAN loss and L1 or L2 loss as cycle-consistency loss, as well as the official CycleGAN. The first two rows are the original images and their projections on the 700-dimensional Euclidean space induced by PCA, which are similar. The next five rows are the corresponding outputs. The last three rows are the reconstructed images from the different CycleGANs.

In this section we test the translation task between CelebA images (Liu et al., 2015). We randomly pick 60000 images from the CelebA training dataset and divide them into two clusters: (a) images with the attribute “smiling” labeled as false, and (b) images with the attribute “smiling” labeled as true. The images are cropped so that only the faces remain. We view the two clusters as samples of µ and ν, respectively. The total number of components in PCA is set to 700. Note that our goal is to generate images of smiling faces belonging to the same people as in the input images. The difficulty lies in the fact that the training data are unpaired and we actually do not have ground truth as a reference.
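A minimal sketch (our illustration with random stand-in data) of the PCA embedding pipeline used here: flatten the images, project onto the leading 700 principal components to obtain the Euclidean embedding fed to the generator, and invert the projection for visualization.

```python
import numpy as np
from sklearn.decomposition import PCA

imgs = np.random.rand(1000, 64 * 64 * 3)   # stand-in for flattened face crops
pca = PCA(n_components=700).fit(imgs)
z = pca.transform(imgs)                     # 700-d embedding vectors, input to G
z_out = z                                   # here the trained generator would map z -> z_out
out_imgs = pca.inverse_transform(z_out)     # back to image space for display
```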
We compare our method against vanilla WGAN-GP, and against CycleGAN with WGAN-GP as the GAN loss. In CycleGAN we use the L1 or L2 loss as the cycle-consistency loss, with weight 10 as in Zhu et al. (2017). To make a fair comparison, for each generator and discriminator we use a feedforward neural network (FNN) with the same hidden-layer sizes (5 × 256). We also test the official CycleGAN3 on this problem, where we feed the cropped face images (with resizing but without random jitter cropping) instead of embedding vectors into the model. The results for images randomly picked from the test dataset are shown in Figure 5.

For most of the images, our method successfully translates the non-smiling faces to smiling faces belonging to the same people. Some of the output images are blurred, since it is difficult to learn the high-order modes of PCA with an FNN. Vanilla WGAN-GP and CycleGAN with FNN totally failed, in that the input and output images come from different people. This comparison clearly shows the necessity of additional regularity for the generators in translation tasks with only unpaired data, and that GAN loss + cycle-consistency loss cannot provide sufficient regularity.

The official CycleGAN is decent in performance, generating images less blurred than our method, but it failed to change the countenance in some images. Note that the number of parameters in the official CycleGAN is about 30 times larger than in our method, and its total training time is more than twice that of our method on a single NVIDIA Tesla V100 GPU." }, { "heading": "5 DISCUSSION AND CONCLUSIONS", "text": "In this paper we propose potential flow generators with L2 optimal transport regularity as plug-and-play generator modules that can be easily integrated into a wide range of generative models. In particular, we propose two versions: the discrete one and the continuous one. For the discrete version, the L2 optimal transport regularity is directly built in, while for the continuous version we only need a slight augmentation to the original generator loss functions to impose the L2 optimal transport regularity.

We first show that the potential flow generators are able to approximate the L2 optimal transport maps in 2D problems. The continuous potential flow generator outperforms the discrete one in robustness. The continuous potential flow generator is also applied to WGAN-GP and continuous normalizing flow models, where we illustrate the characteristic of “proximity” of the potential flow generator due to the L2 optimal transport regularity. Consequently, we show the effectiveness of our method in image translation tasks using unpaired training data from the MNIST dataset and the CelebA dataset. Our method significantly outperforms vanilla WGAN-GP and CycleGAN using FNNs with the same hidden-layer sizes.

We think that the results of our method in the translation tasks are impressive considering that we only use PCA, a linear embedding method, with only feedforward neural networks. Such a naive strategy actually leads to the blurred patterns in the output images, which is also the case (even more severely) for vanilla WGAN-GP and CycleGAN using the same strategy. A possible improvement is to integrate the potential flow generator with other embedding techniques like autoencoders with convolutional neural networks.
Apart from image-to-image translations, it is also possible to apply the potential flow generator to other translation tasks, if the translation objects can be properly embedded into a Euclidean space. Moreover, while it is perceived that the training of ODE-based models is slow, the training of our method could be accelerated by applying methods related to optimal transport, e.g., the Wasserstein natural gradient method (Tong Lin et al., 2018; Li & Montúfar, 2018). We leave these possible improvements to future work.

3We used the code from https://www.tensorflow.org/tutorials/generative/cyclegan. The size of the training dataset is reduced to 1000 for the official CycleGAN, which is similar to the horse-to-zebra dataset. The number of total epochs is set back to 200 as in the original CycleGAN paper." }, { "heading": "A PROOF OF THE OPTIMAL MAPS IN SECTION 4.1", "text": "In problem 1, for Gaussian distributions µ and ν with means m1 and m2, and covariance matrices Σ1 and Σ2, from Gelbrich (1990) we know that the minimum transport cost from µ to ν with cost function c(x, y) = ‖x − y‖2 is
$$\|m_1 - m_2\|^2 + \mathrm{Tr}\Big(\Sigma_1 + \Sigma_2 - 2\big(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2}\big)^{1/2}\Big), \qquad (18)$$
which is known as the squared Wasserstein-2 distance between two Gaussian distributions. In particular, the minimum transport cost is 0.5 in our problem.

For the map f((x, y)) = (2x, 0.5y), f#µ is Gaussian since f is linear. By checking the mean and covariance we have f#µ = ν. Also, the transport cost of f is
$$\mathbb{E}_{x\sim\mathcal{N}(0,0.25)}(2x - x)^2 + \mathbb{E}_{y\sim\mathcal{N}(0,1)}(0.5y - y)^2 = 0.25 + 0.25 = 0.5, \qquad (19)$$
which is exactly the minimum transport cost; thus f is the optimal transport map. This completes the proof of the optimal transport map in problem 1.

In problem 2, denote X = [0.5, 1], Y = [2, 2.5], O = [0, 2π), and m1 = U(X), m2 = U(Y), mθ = U(O), where we use U(A) to represent the uniform probability measure on a set A. For f((r, θ)) = (r + 1.5, θ), µ = U(X) × U(O), ν = U(Y) × U(O), we have f#µ = ν. For any transport map from µ to ν, denoted g((r, θ)) = (gr(r, θ), gθ(r, θ)) in polar coordinates, we only need to show that the transport cost of g is no less than the cost of f.

Let h((r, θ)) = (gr(r, θ), θ); then the transport cost of g is no less than the cost of h, since the transport cost is reduced for any point (r, θ).4

4It is not necessary that h#µ = ν.

Actually, we can view gr(r, θ) as a multivalued function of r, so that gr(r, θ) induces a transport plan H from m1 to m2. More formally, define the measure H : B(X × Y) → R by
$$H(A) = \int_{\mathbb{X}}\int_{\mathbb{Y}} \mathbb{1}_{(x,y)\in A}\, M(x, dy)\, dm_1(x) = \int_A M(x, dy)\, dm_1(x), \qquad (20)$$
where M(·, ·) : X × B(Y) → R is defined by
$$M(x, A) = \int_{\mathbb{O}} \mathbb{1}_{g_r(x,\theta)\in A}\, dm_\theta(\theta). \qquad (21)$$
To see that H is a transport plan from m1 to m2, we need to check:

1. ∀A ∈ B(X), H(A × Y) = m1(A). This is true since
$$H(A \times \mathbb{Y}) = \int_A \int_{\mathbb{Y}} M(x, dy)\, dm_1(x) = \int_A M(x, \mathbb{Y})\, dm_1(x) = m_1(A), \qquad (22)$$
where we use that M(x, Y) = 1.

2. ∀A ∈ B(Y), H(X × A) = m2(A). Note that g#µ = ν, thus µ(g−1(A × O)) = ν(A × O) = m2(A). Also,
$$\mu\big(g^{-1}(A \times \mathbb{O})\big) = \int_{\mathbb{X}}\int_{\mathbb{O}} \mathbb{1}_{g(x,\theta)\in A\times\mathbb{O}}\, dm_\theta(\theta)\, dm_1(x) = \int_{\mathbb{X}}\int_{\mathbb{O}} \mathbb{1}_{g_r(x,\theta)\in A}\, dm_\theta(\theta)\, dm_1(x) = \int_{\mathbb{X}} M(x, A)\, dm_1(x) = H(\mathbb{X} \times A). \qquad (23)$$
Therefore H(X × A) = m2(A).

We also claim that the L2 transport cost of h equals that of H. The transport costs of h and H are
$$C(h) = \int_{\mathbb{X}}\int_{\mathbb{O}} \big(g_r(x,\theta) - x\big)^2\, dm_\theta(\theta)\, dm_1(x), \qquad C(H) = \int_{\mathbb{X}\times\mathbb{Y}} (y - x)^2\, dH = \int_{\mathbb{X}}\int_{\mathbb{Y}} (y - x)^2\, M(x, dy)\, dm_1(x), \qquad (24)$$
respectively. By the definition of M, we have M(x, ·) = gr(x, ·)#mθ for any x ∈ X, thus
$$\int_{\mathbb{Y}} (y - x)^2\, M(x, dy) = \int_{\mathbb{O}} \big(g_r(x,\theta) - x\big)^2\, dm_\theta(\theta) \qquad (25)$$
for any x ∈ X. Therefore C(h) = C(H).
Let F(x) = x + 1.5 be another transport plan from m1 to m2; clearly the L2 transport cost of f equals that of F. Note that the transport cost of H is no less than that of F, since the latter is the optimal transport plan from m1 to m2. This completes the proof of the claim that the transport cost of g is no less than that of f, and thus the proof of the optimal transport map in problem 2." }, { "heading": "B DETAILS OF LOSS FUNCTIONS IN CONTINUOUS NORMALIZING FLOWS", "text": "To estimate $\log p_{G_{\#}\mu}(y)$, we have the ODE that connects the probability density at the inputs and outputs of the generator:
$$\frac{d}{dt}\log\big(p(\tilde u(t,x))\big) = -\nabla_{\tilde u}\cdot \tilde v\big(t, \tilde u(t,x)\big) = -\Delta_{\tilde u}\tilde\phi\big(t, \tilde u(t,x)\big), \qquad (26)$$
for all x in the support of µ, where the initial probability density p(ũ(0, x)) = pµ(x) is the density of µ at the input x, while the terminal probability density $p(\tilde u(T,x)) = p_{G_{\#}\mu}(G(x))$ is the density of G#µ at the output G(x).

Also, we estimate x = G−1(y) by solving the ODE
$$\frac{dw}{dt} = -\tilde v(T - t, w) \qquad (27)$$
with initial condition w(0) = y = G(x), so that w(T) is the corresponding x = G−1(y).

For each y, we can use Equation 27 to estimate the corresponding x = G−1(y), and consequently log(pµ(x)), since we have the density of µ. Then we apply Equation 26 to estimate $\log p_{G_{\#}\mu}(y)$. By sampling y ∼ ν, we can estimate $\mathbb{E}_{y\sim\nu}[\log p_{G_{\#}\mu}(y)]$. In practice, we also need to discretize Equations 26 and 27 properly. For example, we use the first-order Euler scheme in our practice. Note that when applying maximum likelihood training, the density of µ could be unnormalized, since multiplication with pµ would merely lead to a constant difference in the loss function." }
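A first-order Euler sketch of this procedure (our illustration; `phi_net` maps (t, x) to the scalar potential, and `log_p_mu` is the log-density of µ). The divergence in Equation 26 equals the Laplacian of φ̃, computed here as the trace of the Hessian, one dimension at a time:

```python
import torch

def _velocity(phi_net, t, x):
    tx = torch.cat([torch.full((x.shape[0], 1), t), x], dim=1)
    return torch.autograd.grad(phi_net(tx).sum(), x, create_graph=True)[0]

def log_density(phi_net, y, log_p_mu, T=1.0, n=100):
    # Eq. 27: integrate dw/dt = -v(T - t, w) from w(0) = y to get x = G^{-1}(y).
    dt = T / n
    w = y.detach().requires_grad_(True)
    for i in range(n):
        w = w - dt * _velocity(phi_net, T - i * dt, w)
    # Eq. 26: accumulate d log p / dt = -lap phi along the forward flow from x.
    u, logp = w, log_p_mu(w)
    for i in range(n):
        v = _velocity(phi_net, i * dt, u)
        lap = sum(torch.autograd.grad(v[:, j].sum(), u, create_graph=True)[0][:, j]
                  for j in range(u.shape[1]))
        logp = logp - dt * lap
        u = u + dt * v
    return logp  # estimate of log p_{G#mu}(y), usable for maximum likelihood
```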
, { "heading": "C NETWORKS AND HYPERPARAMETERS", "text": "Except in the regularized discrete optimal transport solver and the official CycleGAN, all the neural networks are feedforward neural networks with 5 hidden layers, each of width 128 for the 2D problems, or 256 in the image translation tasks. For the potential flow generator, the activation function is tanh, for the smoothness of φ̃. The activation function in the vanilla generator in Figure 2 is also tanh. All the other activation functions are leaky ReLU (Maas et al., 2013) with negative slope 0.2, except in the regularized discrete optimal transport solver and the official CycleGAN.

The batch size is set to 1000 for all cases except in Table 2 and in the official CycleGAN. We use 1000 random projection directions to estimate the sliced Wasserstein distances. In the WGAN-GP model the coefficient of the gradient penalty is 0.1, and we do 5 discriminator updates per generator update.

In the potential flow generators, the time span T is 1.0. We set the number of total time steps n = 4 in the discrete potential flow generators, while n = 100 in the continuous potential flow generators for the 2D problems and n = 10 in the image translation tasks. The PDE penalty weight λ for the continuous potential flow generator is set to 1.0 by default, except in the 2D problems where we compare different generators.

We use the Adam optimizer (Kingma & Ba, 2014) for all the problems. In the 2D problems, the learning rate is set as $l = 10^{-5}$, β1 = 0.5, β2 = 0.999 for the sliced Wasserstein distance, while it is set as $l = 10^{-5}$, β1 = 0.5, β2 = 0.9 in WGAN-GP. We train the generators for 100,000 iterations in Figure 2 and Figure 3a, and 20,000 iterations in Table 2. In the normalizing flow model we set $l = 10^{-4}$, β1 = 0.9, β2 = 0.999, and train the generator for 10,000 iterations. In the image translation tasks we set $l = 10^{-4}$, β1 = 0.5, β2 = 0.9, and train the generator for 100,000 iterations for our method and the CycleGANs with FNN, and 200,000 iterations for vanilla GAN." }, { "heading": "D MORE RESULTS ON THE MNIST AND CELEBA DATASET", "text": "" } ]
2019
null
SP:927a1f8069c0347c4d0a8b1b947533f1c508ba42
[ "The main claim of this paper is that a simple strategy of randomization plus fast gradient sign method (FGSM) adversarial training yields robust neural networks. This is somewhat surprising because previous works indicate that FGSM is not a powerful attack compared to iterative versions of it like projected gradient descent (PGD), and it has not been shown before that models trained on FGSM can defend against PGD attacks. Judging from the results in the paper alone, there are some issues with the experiment results that could be due to bugs or other unexplained experiment settings. ", "The authors claimed a classic adversarial training method, FGSM with random start, can indeed train a model that is robust to strong PGD attacks. Moreover, when it is combined with some fast training methods, such as cyclic learning rate scheduling and mixed precision, the adversarial training time can be significantly decreased. The experiment verifies the authors' claim convincingly." ]
Adversarial training, a method for learning robust deep networks, is typically assumed to be more expensive than traditional training due to the necessity of constructing adversarial examples via a first-order method like projected gradient descent (PGD). In this paper, we make the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach that was previously believed to be ineffective, rendering the method no more costly than standard training in practice. Specifically, we show that adversarial training with the fast gradient sign method (FGSM), when combined with random initialization, is as effective as PGD-based training but has significantly lower cost. Furthermore we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR10 classifier with 45% robust accuracy to PGD attacks with ε = 8/255 in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at ε = 2/255 in 12 hours, in comparison to past work based on “free” adversarial training which took 10 and 50 hours to reach the same respective thresholds. Finally, we identify a failure mode referred to as “catastrophic overfitting” which may have caused previous attempts to use FGSM adversarial training to fail. All code for reproducing the experiments in this paper as well as pretrained model weights are at https://github.com/locuslab/fast_adversarial.
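A minimal PyTorch-style sketch of the FGSM-with-random-initialization training step described in this abstract (our paraphrase, not the authors' released code; the step size α = 10/255 for ε = 8/255 follows the setting reported in the paper, and all names here are illustrative):

```python
import torch
import torch.nn.functional as F

def fgsm_rs_step(model, opt, x, y, eps=8/255, alpha=10/255):
    # Random initialization inside the l_inf ball, then a single FGSM step.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    F.cross_entropy(model(x + delta), y).backward()
    delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps).detach()
    # Standard training update on the perturbed (and pixel-range-clipped) batch.
    opt.zero_grad()
    F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
    opt.step()
```

The random start is the key difference from plain FGSM training: without it, a single gradient-sign step from the clean input is a much weaker adversary, which is consistent with the "catastrophic overfitting" failure mode the abstract mentions.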
[ { "affiliations": [], "name": "Eric Wong" }, { "affiliations": [], "name": "Leslie Rice" } ]
[ { "authors": [ "Anish Athalye", "Logan Engstrom", "Andrew Ilyas", "Kevin Kwok" ], "title": "Synthesizing robust adversarial examples", "venue": "arXiv preprint arXiv:1707.07397,", "year": 2017 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Jacob Buckman", "Aurko Roy", "Colin Raffel", "Ian Goodfellow" ], "title": "Thermometer encoding: One hot way to resist", "venue": null, "year": 2018 }, { "authors": [ "Nicholas Carlini" ], "title": "Is ami (attacks meet interpretability) robust to adversarial examples", "venue": "arXiv preprint arXiv:1902.02322,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Defensive distillation is not robust to adversarial examples", "venue": "arXiv preprint arXiv:1607.04311,", "year": 2016 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian Goodfellow", "Aleksander Madry" ], "title": "On evaluating adversarial robustness", "venue": null, "year": 1902 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "arXiv preprint arXiv:1902.02918,", "year": 2019 }, { "authors": [ "Cody Coleman", "Deepak Narayanan", "Daniel Kang", "Tian Zhao", "Jian Zhang", "Luigi Nardi", "Peter Bailis", "Kunle Olukotun", "Chris Ré", "Matei Zaharia" ], "title": "Dawnbench: An end-to-end deep learning benchmark and competition", "venue": null, "year": 2017 }, { "authors": [ "Francesco Croce", "Maksym Andriushchenko", "Matthias Hein" ], "title": "Provable robustness of relu networks via maximization of linear regions", "venue": "arXiv preprint arXiv:1810.07481,", "year": 2018 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Anish Athalye" ], "title": "Evaluating and understanding the robustness of adversarial logit pairing", "venue": "arXiv preprint arXiv:1807.10272,", "year": 2018 }, { "authors": [ "Reuben Feinman", "Ryan R Curtin", "Saurabh Shintre", "Andrew B Gardner" ], "title": "Detecting adversarial samples from artifacts", "venue": "arXiv preprint arXiv:1703.00410,", "year": 2017 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Sven Gowal", "Krishnamurthy Dvijotham", "Robert Stanforth", "Rudy Bunel", "Chongli Qin", "Jonathan Uesato", "Timothy Mann", "Pushmeet Kohli" ], "title": "On the effectiveness of interval bound propagation for training verifiably robust models", "venue": "arXiv 
preprint arXiv:1810.12715,", "year": 2018 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens Van Der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "arXiv preprint arXiv:1711.00117,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Harini Kannan", "Alexey Kurakin", "Ian Goodfellow" ], "title": "Adversarial logit pairing", "venue": "arXiv preprint arXiv:1803.06373,", "year": 2018 }, { "authors": [ "Guy Katz", "Clark Barrett", "David L Dill", "Kyle Julian", "Mykel J Kochenderfer" ], "title": "Reluplex: An efficient smt solver for verifying deep neural networks", "venue": "In International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Jiajun Lu", "Hussein Sibai", "Evan Fabry", "David Forsyth" ], "title": "No need to worry about adversarial examples in object detection in autonomous vehicles", "venue": "arXiv preprint arXiv:1707.03501,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Pratyush Maini", "Eric Wong", "J Zico Kolter" ], "title": "Adversarial robustness against the union of multiple perturbation models", "venue": "arXiv preprint arXiv:1909.04068,", "year": 2019 }, { "authors": [ "Jan Hendrik Metzen", "Tim Genewein", "Volker Fischer", "Bastian Bischoff" ], "title": "On detecting adversarial perturbations", "venue": "arXiv preprint arXiv:1702.04267,", "year": 2017 }, { "authors": [ "Paulius Micikevicius", "Sharan Narang", "Jonah Alben", "Gregory Diamos", "Erich Elsen", "David Garcia", "Boris Ginsburg", "Michael Houston", "Oleksii Kuchaiev", "Ganesh Venkatesh" ], "title": "Mixed precision training", "venue": "arXiv preprint arXiv:1710.03740,", "year": 2017 }, { "authors": [ "Matthew Mirman", "Timon Gehr", "Martin Vechev" ], "title": "Differentiable abstract interpretation for provably robust neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Marius Mosbach", "Maksym Andriushchenko", "Thomas Trost", "Matthias Hein", "Dietrich Klakow" ], "title": "Logit pairing methods can fool gradient-based attacks", "venue": null, "year": 1810 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy S Liang" ], "title": "Semidefinite relaxations for certifying robustness to adversarial examples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hadi Salman", "Greg Yang", "Jerry Li", "Pengchuan Zhang", "Huan Zhang", "Ilya Razenshteyn", "Sebastien Bubeck" ], "title": "Provably robust deep learning via adversarially trained smoothed classifiers", "venue": null, "year": 1906 }, { "authors": [ 
"Ali Shafahi", "Mahyar Najibi", "Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free", "venue": null, "year": 1904 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "arXiv preprint arXiv:1710.10571,", "year": 2017 }, { "authors": [ "Leslie N Smith" ], "title": "Cyclical learning rates for training neural networks", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2017 }, { "authors": [ "Leslie N Smith", "Nicholay Topin" ], "title": "Super-convergence: Very fast training of residual networks using large learning", "venue": null, "year": 2018 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "arXiv preprint arXiv:1710.10766,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Guanhong Tao", "Shiqing Ma", "Yingqi Liu", "Xiangyu Zhang" ], "title": "Attacks meet interpretability: Attribute-steered detection of adversarial samples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Vincent Tjeng", "Kai Xiao", "Russ Tedrake" ], "title": "Evaluating robustness of neural networks with mixed integer programming", "venue": "arXiv preprint arXiv:1711.07356,", "year": 2017 }, { "authors": [ "Florian Tramèr", "Dan Boneh" ], "title": "Adversarial training and robustness for multiple perturbations", "venue": "arXiv preprint arXiv:1904.13000,", "year": 2019 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "Jonathan Uesato", "Brendan O’Donoghue", "Aaron van den Oord", "Pushmeet Kohli" ], "title": "Adversarial risk and the dangers of evaluating against weak attacks", "venue": "arXiv preprint arXiv:1802.05666,", "year": 2018 }, { "authors": [ "Jianyu Wang" ], "title": "Bilateral adversarial training: Towards fast training of more robust models against adversarial attacks", "venue": "arXiv preprint arXiv:1811.10716,", "year": 2018 }, { "authors": [ "Eric Wong", "J Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "arXiv preprint arXiv:1711.00851,", "year": 2017 }, { "authors": [ "Eric Wong", "Frank Schmidt", "Jan Hendrik Metzen", "J Zico Kolter" ], "title": "Scaling provable adversarial defenses", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kai Y Xiao", "Vincent Tjeng", "Nur Muhammad Shafiullah", "Aleksander Madry" ], "title": "Training for faster adversarial robustness verification via inducing relu stability", "venue": "arXiv preprint arXiv:1809.03008,", "year": 2018 }, { "authors": [ "Yuzhe Yang", "Guo Zhang", "Dina Katabi", "Zhi Xu" ], "title": "Me-net: Towards effective adversarial robustness with matrix estimation", "venue": "arXiv preprint arXiv:1905.11971,", 
"year": 2019 }, { "authors": [ "Dinghuai Zhang", "Tianyuan Zhang", "Yiping Lu", "Zhanxing Zhu", "Bin Dong" ], "title": "You only propagate once: Painless adversarial training using maximal principle", "venue": null, "year": 1905 }, { "authors": [ "Tramèr" ], "title": "A A DIRECT COMPARISON TO R+FGSM FROM TRAMÈR ET AL. (2017) While a randomized version of FGSM adversarial training was proposed by Tramèr et al. (2017), it was not shown to be as effective as adversarial training against a PGD adversary", "venue": null, "year": 2017 }, { "authors": [ "In comparison", "Tramèr" ], "title": "2017) instead initialize on the surface of a hypercube with", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Although deep network architectures continue to be successful in a wide range of applications, the problem of learning robust deep networks remains an active area of research. In particular, safety and security focused applications are concerned about robustness to adversarial examples, data points which have been adversarially perturbed to fool a model (Szegedy et al., 2013). The goal here is to learn a model which is not only accurate on the data, but also accurate on adversarially perturbed versions of the data. To this end, a number of defenses have been proposed to mitigate the problem and improve the robustness of deep networks, with some of the most reliable being certified defenses and adversarial training. However, both of these approaches come at a non-trivial, additional computational cost, often increasing training time by an order of magnitude over standard training. This has slowed progress in researching robustness in deep networks, due to the computational difficulty in scaling to much larger networks and the inability to rapidly train models when experimenting with new ideas. In response to this difficulty, there has been a recent surge in work\n∗Equal contribution.\nthat tries to to reduce the complexity of generating an adversarial example, which forms the bulk of the additional computation in adversarial training (Zhang et al., 2019; Shafahi et al., 2019). While these works present reasonable improvements to the runtime of adversarial training, they are still significantly slower than standard training, which has been greatly accelerated due to competitions for optimizing both the speed and cost of training (Coleman et al., 2017).\nIn this work, we argue that adversarial training, in fact, is not as hard as has been suggested by this past line of work. In particular, we revisit one of the the first proposed methods for adversarial training, using the Fast Gradient Sign Method (FGSM) to add adversarial examples to the training process (Goodfellow et al., 2014). Although this approach has long been dismissed as ineffective, we show that by simply introducing random initialization points, FGSM-based training is as effective as projected gradient descent based training while being an order of magnitude more efficient. Moreover, FGSM adversarial training (and to a lesser extent, other adversarial training methods) can be drastically accelerated using standard techniques for efficient training of deep networks, including e.g. cyclic learning rates (Smith & Topin, 2018), mixed-precision training (Micikevicius et al., 2017), and other similar techniques. The method has extremely few free parameters to tune, and can be easily adapted to most training procedures. We further identify a failure mode that we call “catastrophic overfitting”, which may have caused previous attempts at FGSM adversarial training to fail against PGD-based attacks.\nThe end result is that, with these approaches, we are able to train (empirically) robust classifiers far faster than in previous work. Specifically, we train an `∞ robust CIFAR10 model to 45% accuracy at = 8/255 (the same level attained in previous work) in 6 minutes; previous papers reported times of 80 hours for PGD-based training (Madry et al., 2017) and 10 hours for the more recent “free” adversarial training method (Shafahi et al., 2019). 
Similarly, we train an ℓ∞ robust ImageNet classifier to 43% top-1 accuracy at ε = 2/255 (again matching previous results) in 12 hours of training (compared to 50 hours in the best reported previous work that we are aware of (Shafahi et al., 2019)). Both of these times roughly match the comparable time for quickly training a standard non-robust model to reasonable accuracy. We extensively evaluate these results against strong PGD-based attacks, and show that they obtain the same empirical performance as the slower, PGD-based training. Thus, we argue that despite the conventional wisdom, adversarially robust training is not actually more challenging than standard training of deep networks, and can be accomplished with the notoriously weak FGSM attack." }, { "heading": "2 RELATED WORK", "text": "After the discovery of adversarial examples by Szegedy et al. (2013), Goodfellow et al. (2014) proposed the Fast Gradient Sign Method (FGSM) to generate adversarial examples with a single gradient step. This method was used to perturb the inputs to the model before performing backpropagation as an early form of adversarial training. This attack was enhanced by adding a randomization step, which was referred to as R+FGSM (Tramèr et al., 2017). Later, the Basic Iterative Method improved upon FGSM by taking multiple, smaller FGSM steps, ultimately rendering FGSM-based adversarial training ineffective (Kurakin et al., 2016). This iterative adversarial attack was further strengthened by adding multiple random restarts, and was also incorporated into the adversarial training procedure. These improvements form the basis of what is widely understood today as adversarial training against a projected gradient descent (PGD) adversary, and the resulting method is recognized as an effective approach to learning robust networks (Madry et al., 2017). Since then, the PGD attack and its corresponding adversarial training defense have been augmented with various techniques, such as optimization tricks like momentum to improve the adversary (Dong et al., 2018), combination with other heuristic defenses like matrix estimation (Yang et al., 2019) or logit pairing (Mosbach et al., 2018), and generalization to multiple types of adversarial attacks (Tramèr & Boneh, 2019; Maini et al., 2019).

In addition to adversarial training, a number of other defenses against adversarial attacks have also been proposed. Adversarial defenses span a wide range of methods, such as preprocessing techniques (Guo et al., 2017; Buckman et al., 2018; Song et al., 2017), detection algorithms (Metzen et al., 2017; Feinman et al., 2017; Carlini & Wagner, 2017a), verification and provable defenses (Katz et al., 2017; Sinha et al., 2017; Wong & Kolter, 2017; Raghunathan et al., 2018), and various theoretically motivated heuristics (Xiao et al., 2018; Croce et al., 2018). 
While certified defenses have been scaled to reasonably sized networks (Wong et al., 2018; Mirman et al., 2018; Gowal et al., 2018; Cohen et al., 2019; Salman et al., 2019), the guarantees don't match the empirical robustness obtained through adversarial training.

With the proposal of many new defense mechanisms, of great concern in the community is the use of strong attacks for evaluating robustness: weak attacks can give a misleading sense of security, and the history of adversarial examples is littered with adversarial defenses (Papernot et al., 2016; Lu et al., 2017; Kannan et al., 2018; Tao et al., 2018) which were ultimately defeated by stronger attacks (Carlini & Wagner, 2016; 2017b; Athalye et al., 2017; Engstrom et al., 2018; Carlini, 2019). This highlights the difficulty of evaluating adversarial robustness, as pointed out by other work which began to defeat proposed defenses en masse (Uesato et al., 2018; Athalye et al., 2018). Since then, several best practices have been proposed to mitigate this problem (Carlini et al., 2019).

Despite the eventual defeat of other adversarial defenses, adversarial training with a PGD adversary remains empirically robust to this day. However, running a strong PGD adversary within an inner loop of training is expensive, and some earlier work in this topic found that taking larger but fewer steps did not always significantly change the resulting robustness of a network (Wang, 2018). To combat the increased computational overhead of the PGD defense, some recent work has looked at regressing the k-step PGD adversary to a variation of its single-step FGSM predecessor called “free” adversarial training, which can be computed with little overhead over standard training by using a single backwards pass to simultaneously update both the model weights and also the input perturbation (Shafahi et al., 2019). Finally, when performing a multi-step PGD adversary, it is possible to cut out redundant calculations during backpropagation when computing adversarial examples for additional speedup (Zhang et al., 2019).

Although these improvements are certainly faster than the standard adversarial training procedure, they are not much faster than traditional training methods, and can still take hours to days to compute. On the other hand, top performing training methods from the DAWNBench competition (Coleman et al., 2017) are able to train CIFAR10 and ImageNet architectures to standard benchmark metrics in mere minutes and hours respectively, using only a modest amount of computational resources. Although some of the techniques can be quite problem specific for achieving bleeding-edge performance, more general techniques such as cyclic learning rates (Smith & Topin, 2018) and half-precision computations (Micikevicius et al., 2017) have been quite successful in the top ranking submissions, and can also be useful for adversarial training." }, { "heading": "3 ADVERSARIAL TRAINING OVERVIEW", "text": "Adversarial training is a method for learning networks which are robust to adversarial attacks. Given a network fθ parameterized by θ, a dataset (xi, yi), a loss function ℓ and a threat model ∆, the learning problem is typically cast as the following robust optimization problem,

minθ Σi maxδ∈∆ ℓ(fθ(xi + δ), yi). (1)

A typical choice for a threat model is to take ∆ = {δ : ‖δ‖∞ ≤ ε} for some ε > 0. This is the ℓ∞ threat model used by Madry et al. (2017) and is the setting we study in this paper. 
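To make the objective concrete, the following is a minimal PyTorch-style sketch (an illustration, not the authors' code; the names linf_project and inner_max are hypothetical) of how the projection onto ∆ and the outer objective of Eq. (1) fit together:

import torch

def linf_project(delta, epsilon):
    # For the l-infinity threat model, projection onto Delta reduces to
    # element-wise clamping of each coordinate to [-epsilon, epsilon].
    return torch.clamp(delta, -epsilon, epsilon)

def robust_loss(model, loss_fn, x, y, inner_max):
    # Outer objective of Eq. (1): evaluate the loss at the (approximate)
    # worst-case perturbation returned by some inner maximization routine.
    delta = inner_max(model, loss_fn, x, y)
    return loss_fn(model(x + delta), y)

Any of the attacks below (FGSM, PGD) can serve as the inner_max callback; the choice of approximation is exactly what distinguishes the training methods compared in this paper.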
The procedure for adversarial training is to use some adversarial attack to approximate the inner maximization over ∆, followed by some variation of gradient descent on the model parameters θ. For example, one of the earliest versions of adversarial training used the Fast Gradient Sign Method to approximate the inner maximization. This could be seen as a relatively inaccurate approximation of the inner maximization for ℓ∞ perturbations, and has the following closed form (Goodfellow et al., 2014):

δ∗ = ε · sign(∇x ℓ(f(x), y)). (2)

A better approximation of the inner maximization is to take multiple, smaller FGSM steps of size α instead. When the iterate leaves the threat model, it is projected back to the set ∆ (for ℓ∞ perturbations, this is equivalent to clipping δ to the interval [−ε, ε]). Since this is only a local approximation of a non-convex function, multiple random restarts within the threat model ∆ typically improve the approximation of the inner maximization even further. A combination of all these techniques is known as the PGD adversary (Madry et al., 2017), and its usage in adversarial training is summarized in Algorithm 1.

Algorithm 1 PGD adversarial training for T epochs, given some radius ε, adversarial step size α and N PGD steps, and a dataset of size M for a network fθ
for t = 1 . . . T do
  for i = 1 . . . M do
    // Perform PGD adversarial attack
    δ = 0 // or randomly initialized
    for j = 1 . . . N do
      δ = δ + α · sign(∇δ ℓ(fθ(xi + δ), yi))
      δ = max(min(δ, ε), −ε)
    end for
    θ = θ − ∇θ ℓ(fθ(xi + δ), yi) // Update model weights with some optimizer, e.g. SGD
  end for
end for

Algorithm 2 “Free” adversarial training for T epochs, given some radius ε, N minibatch replays, and a dataset of size M for a network fθ
δ = 0
// Iterate T/N times to account for minibatch replays and run for T total epochs
for t = 1 . . . T/N do
  for i = 1 . . . M do
    // Perform simultaneous FGSM adversarial attack and model weight updates N times
    for j = 1 . . . N do
      // Compute gradients for perturbation and model weights simultaneously
      ∇δ, ∇θ = ∇ℓ(fθ(xi + δ), yi)
      δ = δ + ε · sign(∇δ)
      δ = max(min(δ, ε), −ε)
      θ = θ − ∇θ // Update model weights with some optimizer, e.g. SGD
    end for
  end for
end for

Note that the number of gradient computations here is proportional to O(MN) in a single epoch, where M is the size of the dataset and N is the number of steps taken by the PGD adversary. This is N times greater than standard training (which has O(M) gradient computations per epoch), and so adversarial training is typically N times slower than standard training." }, { "heading": "3.1 “FREE” ADVERSARIAL TRAINING", "text": "To get around this slowdown of a factor of N, Shafahi et al. (2019) instead propose “free” adversarial training. This method takes FGSM steps with full step sizes α = ε followed by updating the model weights for N iterations on the same minibatch (also referred to as “minibatch replays”). The algorithm is summarized in Algorithm 2. Note that perturbations are not reset between minibatches. To account for the additional computational cost of minibatch replay, the total number of epochs is reduced by a factor of N to make the total cost equivalent to T epochs of standard training. Although “free” adversarial training is faster than the standard PGD adversarial training, it is not as fast as we'd like: Shafahi et al. 
(2019) need to run over 200 epochs in over 10 hours to learn a robust CIFAR10 classifier and two days to learn a robust ImageNet classifier, whereas standard training can be accomplished in minutes and hours for the same respective tasks." }, { "heading": "4 FAST ADVERSARIAL TRAINING", "text": "To speed up adversarial training and move towards the state of the art in fast standard training methods, we first highlight the main empirical contribution of the paper: that FGSM adversarial training combined with random initialization is just as effective a defense as PGD-based training. Following this, we discuss several techniques from the DAWNBench competition (Coleman et al., 2017) that are applicable to all adversarial training methods, which reduce the total number of epochs needed for convergence with cyclic learning rates and further speed up computations with mixed-precision arithmetic.

Algorithm 3 FGSM adversarial training for T epochs, given some radius ε, step size α, and a dataset of size M for a network fθ
for t = 1 . . . T do
  for i = 1 . . . M do
    // Perform FGSM adversarial attack
    δ = Uniform(−ε, ε)
    δ = δ + α · sign(∇δ ℓ(fθ(xi + δ), yi))
    δ = max(min(δ, ε), −ε)
    θ = θ − ∇θ ℓ(fθ(xi + δ), yi) // Update model weights with some optimizer, e.g. SGD
  end for
end for" }, { "heading": "4.1 REVISITING FGSM ADVERSARIAL TRAINING", "text": "Despite being quite similar to FGSM adversarial training, free adversarial training is empirically robust against PGD attacks whereas FGSM adversarial training is not believed to be robust. To analyze why, we identify a key difference between the methods: a property of free adversarial training is that the perturbation from the previous iteration is used as the initial starting point for the next iteration. However, there is little reason to believe that an adversarial perturbation for a previous minibatch is a reasonable starting point for the next minibatch. As a result, we hypothesize that the main benefit comes from simply starting from a non-zero initial perturbation.

In light of this difference, our approach is to use FGSM adversarial training with random initialization for the perturbation, as shown in Algorithm 3. We find that, in contrast to what was previously believed, this simple adjustment to FGSM adversarial training can be used as an effective defense on par with PGD adversarial training. Crucially, we find that starting from a non-zero initial perturbation is the primary driver for success, regardless of the actual initialization. In fact, both starting with the previous minibatch's perturbation and initializing from a uniformly random perturbation allow FGSM adversarial training to succeed at being robust to full-strength PGD adversarial attacks.

1As reported by Shafahi et al. (2019) using a different network architecture and an adversary with 20 steps and 10 restarts, which is strictly weaker than the adversary used in this paper.

2As reported by Madry et al. (2017) using a different network architecture and an adversary with 20 steps and no restarts, which is strictly weaker than the adversary used in this paper.

Note that randomized initialization for FGSM is not a new idea and was previously studied by Tramèr et al. (2017). Crucially, Tramèr et al. (2017) use a different, more restricted random initialization and step size, which does not result in models robust to full-strength PGD adversaries. 
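A minimal sketch of the training step in Algorithm 3, written in PyTorch-style Python under the assumption of a standard classifier and cross-entropy loss (an illustration of the method, not the authors' released implementation):

import torch
import torch.nn.functional as F

def fgsm_train_step(model, optimizer, x, y, epsilon, alpha):
    # Random initialization inside the l-infinity ball: the key difference
    # from zero-initialized FGSM adversarial training.
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    # Single FGSM step, then clip back to the threat model.
    delta = torch.clamp(delta + alpha * delta.grad.sign(), -epsilon, epsilon)
    # Update model weights on the adversarial example (second backward pass).
    optimizer.zero_grad()
    loss_adv = F.cross_entropy(model(x + delta.detach()), y)
    loss_adv.backward()
    optimizer.step()
    return loss_adv.item()

The two backward passes in this sketch correspond to the computational complexity discussion below: one pass for the perturbation and one for the weights.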
A more detailed comparison of their approach with ours is in Appendix A.

To test the effect of initialization in FGSM adversarial training, we train several models to be robust at a radius ε = 8/255 on CIFAR10, starting with the most “pure” form of FGSM, which takes steps of size α = ε from a zero-initialized perturbation. The results, given in Table 1, are consistent with the literature, and show that the model trained with zero-initialization is not robust against a PGD adversary. However, surprisingly, simply using a random or previous-minibatch initialization instead of a zero initialization actually results in reasonable robustness levels (with random initialization performing slightly better) that are comparable to both free and PGD adversarial training methods. The adversarial accuracies in Table 1 are calculated using a PGD adversary with 50 iterations, step size α = 2/255, and 10 random restarts. Specific optimization parameters used for training these models can be found in Appendix B.

FGSM step size Note that an FGSM step with size α = ε from a non-zero initialization is not guaranteed to lie on the boundary of the ℓ∞ ball, and so this defense could potentially be seen as too weak. We find that increasing the step size by a factor of 1.25 to α = 10/255 further improved the robustness of the model so that it is on par with the best reported result from free adversarial training. However, we also found that forcing the resulting perturbation to lie on the boundary with a step size of α = 2ε resulted in catastrophic overfitting: it does not produce a model robust to adversarial attacks. These two failure modes (starting from a zero-initialized perturbation and generating perturbations at the boundary) may explain why previous attempts at FGSM adversarial training failed, as the model overfits to a restricted threat model; this is described in more detail in Section 5.4. A full curve showing the effect of a range of FGSM step sizes on the robust performance can be found in Appendix C.

Computational complexity A second key difference between FGSM and free adversarial training is that the latter uses a single backwards pass to compute gradients for both the perturbation and the model weights while repeating the same minibatch m times in a row, called “minibatch replay”. In comparison, FGSM adversarial training does not need to repeat minibatches, but needs two backwards passes to compute gradients separately for the perturbation and the model weights. As a result, the computational complexity for an epoch of FGSM adversarial training is not truly free and is equivalent to two epochs of standard training." }, { "heading": "4.2 DAWNBENCH IMPROVEMENTS", "text": "Although free adversarial training is of comparable cost per iteration to traditional standard training methods, it is not quite comparable in total cost to more recent advancements in fast methods for standard training. Notably, top submissions to the DAWNBench competition have shown that CIFAR10 and ImageNet classifiers can be trained at significantly quicker times and at much lower cost than traditional training methods. 
Although some of the submissions can be quite unique in their approaches, we identify two generally applicable techniques which have a significant impact on the convergence rate and computational speed of standard training.

Cyclic learning rate Introduced by Smith (2017) for improving convergence and reducing the amount of tuning required when training networks, a cyclic schedule for a learning rate can drastically reduce the number of epochs required for training deep networks (Smith & Topin, 2018). A simple cyclic learning rate schedules the learning rate linearly from zero, to a maximum learning rate, and back down to zero (examples can be found in Figure 1). Using a cyclic learning rate allows CIFAR10 architectures to converge to benchmark accuracies in tens of epochs instead of hundreds, and is a crucial component of some of the top DAWNBench submissions.

Mixed-precision arithmetic With newer GPU architectures coming with tensor cores specifically built for rapid half-precision calculations, using mixed-precision arithmetic when training deep networks can also provide significant speedups for standard training (Micikevicius et al., 2017). This can drastically reduce the memory utilization, and when tensor cores are available, also reduce runtime. In some DAWNBench submissions, switching to mixed-precision computations was key to achieving fast training while keeping costs low.

We adopt these two techniques for use in adversarial training, which allows us to drastically reduce the number of training epochs as well as the runtime on GPU infrastructure with tensor cores, while using modest amounts of computational resources. Notably, both of these improvements can be easily applied to existing implementations of adversarial training by adding a few lines of code with very little additional engineering effort, and so are easily accessible by the general research community." }, { "heading": "5 EXPERIMENTS", "text": "To demonstrate the effectiveness of FGSM adversarial training with fast training methods, we run a number of experiments on MNIST, CIFAR10, and ImageNet benchmarks. All CIFAR10 experiments in this paper are run on a single GeForce RTX 2080ti using the PreAct ResNet18 architecture, and all ImageNet experiments are run on a single machine with four GeForce RTX 2080tis using the ResNet50 architecture (He et al., 2016). Repositories for reproducing all experiments and the corresponding trained model weights are available at https://github.com/locuslab/fast_adversarial.

All experiments using FGSM adversarial training in this section are carried out with random initial starting points and step size α = 1.25ε as described in Section 4.1. All PGD adversaries used at evaluation are run with 10 random restarts for 50 iterations (with the same hyperparameters as those used by Shafahi et al. (2019) but further strengthened with random restarts). Speedup with mixed-precision was incorporated with the Apex amp package at the O1 optimization level for ImageNet experiments and O2 without loss scaling for CIFAR10 experiments.3" }, { "heading": "5.1 VERIFIED PERFORMANCE ON MNIST", "text": "Since the FGSM attack is known to be significantly weaker than the PGD attack, it is understandable if the reader is still skeptical of the true robustness of the models trained using this method. 
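For reference, a sketch of the evaluation adversary used throughout this section (PGD with random restarts); the defaults mirror the stated 50 iterations, step size 2/255, and 10 restarts, and the helper name pgd_attack is illustrative rather than taken from the released code:

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon, alpha=2/255, iters=50, restarts=10):
    # An input counts as successfully attacked if any restart finds a
    # misclassifying perturbation within the l-infinity ball.
    best_delta = torch.zeros_like(x)
    attacked = torch.zeros(x.shape[0], dtype=torch.bool, device=x.device)
    for _ in range(restarts):
        delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
        for _ in range(iters):
            loss = F.cross_entropy(model(x + delta), y)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
        wrong = model(x + delta.detach()).argmax(1) != y
        attacked |= wrong
        best_delta[wrong] = delta.detach()[wrong]
    return best_delta, attacked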
To demonstrate that FGSM adversarial training confers real robustness to the model, in addition to evaluating against a PGD adversary, we leverage mixed-integer linear programming (MILP) methods from formal verification to calculate the exact robustness of small, but verifiable models (Tjeng et al., 2017). We train two convolutional networks with 16 and 32 convolutional filters followed by a fully connected layer of 100 units, the same architecture used by Tjeng et al. (2017). We use both PGD and FGSM adversarial training at ε = 0.3, where the PGD adversary for training has 40 iterations with step size 0.01 as done by Madry et al. (2017). The exact verification results can be seen in Table 2, where we find that FGSM adversarial training confers empirical and verified robustness which is nearly indistinguishable from that of PGD adversarial training on MNIST.4

3Since CIFAR10 did not suffer from loss scaling problems, we found using the O2 optimization level without loss scaling for mixed-precision arithmetic to be slightly faster." }, { "heading": "5.2 FAST CIFAR10", "text": "We begin our CIFAR10 experiments by combining the DAWNBench improvements from Section 4.2 with various forms of adversarial training. For N epochs, we use a cyclic learning rate that increases linearly from 0 to λ over the first N/2 epochs, then decreases linearly from λ to 0 for the remaining epochs, where λ is the maximum learning rate. For each method, we individually tune λ to be as large as possible without causing the training loss to diverge, which is the recommended learning rate test from Smith & Topin (2018).

To identify the minimum number of epochs needed for each adversarial training method, we repeatedly run each method over a range of maximum epochs N, and then plot the final robustness of each trained model in Figure 2. While all the adversarial training methods benefit greatly from the cyclic learning rate schedule, we find that both FGSM and PGD adversarial training require far fewer epochs than free adversarial training, and consequently reap the greatest speedups.

4Exact verification results at ε = 0.3 for both the FGSM and PGD trained models are not possible since the size of the resulting MILP is too large to be solved in a reasonable amount of time. The same issue also prevents us from verifying networks trained on datasets larger than MNIST, which have to rely on empirical tests for evaluating robustness.

Using the minimum number of epochs needed for each training method to reach a baseline of 45% robust accuracy, we report the total training time in Table 3. We find that while all adversarial training methods benefit from the DAWNBench improvements, FGSM adversarial training is the fastest, capable of learning a robust CIFAR10 classifier in 6 minutes using only 15 epochs. Interestingly, we also find that PGD and free adversarial training take comparable amounts of time, largely because free adversarial training does not benefit from the cyclic learning rate as much as PGD or FGSM adversarial training." }, { "heading": "5.3 FAST IMAGENET", "text": "Finally, we apply all of the same techniques (FGSM adversarial training, mixed-precision, and cyclic learning rate) on the ImageNet benchmark. In addition, the top submissions from the DAWNBench competition for ImageNet utilize two more improvements on top of this, the first of which is the removal of weight decay regularization from batch normalization layers. 
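This first improvement can be implemented by splitting the model's parameters into two optimizer groups; a minimal PyTorch sketch (assuming standard torch.nn batch-norm modules; the helper name is hypothetical):

import torch.nn as nn

def param_groups_no_bn_decay(model, weight_decay):
    # Keep weight decay on convolution/linear weights, but exclude the
    # batch-norm scale and shift parameters from the decay term.
    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    decay, no_decay = [], []
    for module in model.modules():
        for _, p in module.named_parameters(recurse=False):
            (no_decay if isinstance(module, bn_types) else decay).append(p)
    return [{"params": decay, "weight_decay": weight_decay},
            {"params": no_decay, "weight_decay": 0.0}]

A typical use would be torch.optim.SGD(param_groups_no_bn_decay(model, 5e-4), lr=lr, momentum=0.9), leaving the rest of the training loop unchanged.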
The second addition is to progressively resize images during training, starting with larger batches of smaller images in the beginning and moving on to smaller batches of larger images later. Specifically, training is divided into three phases, where phases 1 and 2 use images resized to 160 and 352 pixels respectively, and phase 3 uses the entire image. We train models to be robust at ε = 2/255 and ε = 4/255 and compare to free adversarial training in Table 4, showing similar levels of robustness. In addition to using ten restarts, we also report the PGD accuracy with one restart to reproduce the evaluation done by Shafahi et al. (2019).

5Runtimes calculated on our hardware using the publicly available training code at https://github.com/MadryLab/cifar10_challenge.

6Runtimes calculated on our hardware using the publicly available training code at https://github.com/ashafahi/free_adv_train.

With these techniques, we can train an ImageNet classifier using 15 epochs in 12 hours using FGSM adversarial training, taking a fraction of the cost of free adversarial training as shown in Table 5.7 We compare to the best performing variation of free adversarial training which uses m = 4 minibatch replays over 92 epochs of training (scaled down accordingly to 23 passes over the data). Note that free adversarial training can also be enhanced with mixed-precision arithmetic, which reduces the runtime by 25%, but is still slower than FGSM-based training. Directly combining free adversarial training with the other fast techniques used in FGSM adversarial training for ImageNet results in reduced performance, which we describe in Appendix F." }, { "heading": "5.4 CATASTROPHIC OVERFITTING", "text": "While FGSM adversarial training works in the context of this paper, many other researchers have tried and failed to have FGSM adversarial training work. In addition to using a zero initialization or too large of a step size as seen in Table 1, other design decisions (like specific learning rate schedules or numbers of epochs) for the training procedure can also make it more likely for FGSM adversarial training to fail. However, all of these failure modes result in what we call “catastrophic overfitting”, where the robust accuracy with respect to a PGD adversary suddenly and drastically drops to 0% (on the training data). Due to the rapid deterioration of robust performance, these alternative versions of FGSM adversarial training can be salvaged to some degree with a simple early-stopping scheme by measuring PGD accuracy on a small minibatch of training data, and the recovered results for some of these failure modes are shown in Table 1. Catastrophic overfitting and the early-stopping scheme are discussed in more detail in Appendix D." }, { "heading": "5.5 TAKEAWAYS FROM FGSM ADVERSARIAL TRAINING", "text": "While it may be surprising that FGSM adversarial training can result in robustness to full PGD adversarial attacks, this work highlights some empirical hypotheses and takeaways which we describe below.

1. Adversarial examples need to span the entire threat model. One of the reasons why FGSM and R+FGSM as done by Tramèr et al. (2017) may have failed is due to the restricted nature of the generated examples: the restricted (or lack of) initialization results in perturbations which perturb each dimension by either 0 or ±ε, and so adversarial examples with feature perturbations in between are never seen. This is discussed further in Appendix D.

2. Defenders don't need strong adversaries during training. 
This work suggests that rough approximations to the inner optimization problem are sufficient for adversarial training. This is in contrast to the usage of strong adversaries at evaluation time, where it is standard practice to use multiple restarts and a large number of PGD steps." }, { "heading": "6 CONCLUSION", "text": "Our findings show that FGSM adversarial training, when used with random initialization, can in fact be just as effective as the more costly PGD adversarial training. While a single iteration of FGSM adversarial training is double the cost of free adversarial training, it converges significantly faster, especially with a cyclic learning rate schedule. As a result, we are able to learn adversarially robust classifiers for CIFAR10 in minutes and for ImageNet in hours, even faster than free adversarial training but with comparable levels of robustness. We believe that leveraging these significant reductions in time to train robust models will allow future work to iterate even faster, and accelerate research in learning models which are resistant to adversarial attacks. By demonstrating that extremely weak adversarial training is capable of learning robust models, this work also exposes a new potential direction in more rigorously explaining when approximate solutions to the inner optimization problem are sufficient for robust optimization, and when they fail.

7We use the implementation of free adversarial training for ImageNet publicly available at https://github.com/mahyarnajibi/FreeAdversarialTraining and reran it on our machines to account for any timing discrepancies due to differences in hardware." }, { "heading": "A A DIRECT COMPARISON TO R+FGSM FROM TRAMÈR ET AL. (2017)", "text": "While a randomized version of FGSM adversarial training was proposed by Tramèr et al. (2017), it was not shown to be as effective as adversarial training against a PGD adversary. Here, we note the two main differences between our approach and that of Tramèr et al. (2017).

1. The random initialization used is different. For a data point x, we initialize with the uniform distribution in the entire perturbation region with

x′ = x + Uniform(−ε, ε).

In comparison, Tramèr et al. (2017) instead initialize on the surface of a hypercube with radius ε/2 with

x′ = x + (ε/2) · sign(Normal(0, 1)).

2. The step sizes used for the FGSM step are different. We use a full step size of α = ε, whereas Tramèr et al. (2017) use a step size of α = ε/2.

To study the effect of these two differences, we run all combinations of either initialization with either step size on MNIST. The results are summarized in Table 6.

We find that using a uniform initialization adds the greatest marginal improvement to the original R+FGSM attack, while using a full step size doesn't seem to help on its own. Implementing both of these improvements results in the form of FGSM adversarial training presented in this paper. Additionally, note that R+FGSM as done by Tramèr et al. (2017) has high variance in robust performance when done over multiple random seeds, whereas our version of FGSM adversarial training is significantly more consistent and has a very low standard deviation over random seeds." }, { "heading": "B TRAINING PARAMETERS FOR TABLE 1", "text": "For all methods, we use a batch size of 128 and an SGD optimizer with momentum 0.9 and weight decay 5 × 10−4. We report the average results over 3 random seeds. The remaining parameters for learning rate schedules and number of epochs for the DAWNBench experiments are in Table 7. 
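The two initialization schemes from Appendix A can be written side by side; a short sketch, where the hypercube-surface form follows the reconstruction of the R+FGSM initialization given above and should be read as an assumption rather than verbatim from Tramèr et al. (2017):

import torch

def init_uniform(x, epsilon):
    # Ours: uniform over the entire l-infinity ball of radius epsilon.
    return torch.empty_like(x).uniform_(-epsilon, epsilon)

def init_hypercube_surface(x, epsilon):
    # R+FGSM-style: every coordinate is +/- epsilon/2, i.e. the surface
    # of a hypercube of radius epsilon/2.
    return (epsilon / 2) * torch.randn_like(x).sign()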
For runs using early-stopping, we use a 5-step PGD adversary with 1 restart on 1 training minibatch to detect overfitting to the FGSM adversaries, as described in more detail in Section D." }, { "heading": "C OPTIMAL STEP SIZE FOR FGSM ADVERSARIAL TRAINING", "text": "Here, we test the effect of step size on the performance of FGSM adversarial training. We plot the mean and standard error of the robust accuracy for models trained for 30 epochs over 3 random seeds in Figure 3, and vary the step size from α = 1/255 to α = 16/255.

We find that we get increasing robust performance as we increase the step size up to α = 10/255. Beyond this, we see no further benefit, or find that the model is prone to overfitting to the adversarial examples, since the large step size forces the model to overfit to the boundary of the perturbation region." }, { "heading": "D CATASTROPHIC OVERFITTING AND THE EFFECT OF EARLY STOPPING", "text": "While the main experiments in this paper work as is (with the cyclic learning rate and FGSM adversarial training with uniform random initialization), many of the variations of FGSM adversarial training which have been found to not succeed all fail similarly: the model will very rapidly (over the span of a couple epochs) appear to overfit to the FGSM adversarial examples. What was previously a reasonably robust model will quickly transform into a non-robust model which suffers 0% robust accuracy (with respect to a PGD adversary). This phenomenon, which we call catastrophic overfitting, can be seen in Figure 4 which plots the learning curves for standard, vanilla FGSM adversarial training from zero-initialization.

Indeed, one of the reasons for this failure may lie in the lack of diversity in adversarial examples generated by these FGSM adversaries. For example, using a zero initialization or using the random initialization scheme from Tramèr et al. (2017) will result in adversarial examples whose features have been perturbed by {−ε, 0, ε}, and so the network learns a decision boundary which is robust only at these perturbation values. This can be verified by running a PGD adversarial attack on models which have catastrophically overfitted, where the perturbations tend to lie between the origin and the boundary of the threat model (relative to a non-overfitted model, which tends to have perturbations near the boundary), as seen in Figure 5.

These failure modes, including the other failure modes discussed in Section 5.4, can be easily detected by evaluating the PGD performance on a small subset of the training data, as the catastrophic failure will result in 0% robust accuracy for a PGD adversary on the training set. In practice, we find that this can be as simple as a single minibatch with a 5-step PGD adversary, which can be quickly checked at the end of the epoch. If robust accuracy with respect to this adversary suddenly drops, then we have catastrophic overfitting. Using a PGD adversary on a training minibatch to detect catastrophic overfitting, we can stop early to avoid catastrophic overfitting and achieve a reasonable amount of robust performance.

For a concrete example, recall from Section C that step sizes larger than 11/255 result in 0% robust accuracy, due to this catastrophic overfitting phenomenon. By using early stopping to catch the model at its peak performance before overfitting, FGSM adversarial training with larger step sizes can actually achieve some degree of robust accuracy, as shown in Figure 6." 
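The end-of-epoch check described in Appendix D admits a very small implementation; a sketch reusing the pgd_attack helper from the Section 5 sketch above (the drop tolerance drop_tol is an assumed hyperparameter, not one specified in the paper):

def detect_catastrophic_overfitting(model, check_batch, epsilon, prev_acc, drop_tol=0.2):
    # Run a short PGD attack on one held-out training minibatch and signal
    # early stopping if robust accuracy collapses relative to the last epoch.
    x, y = check_batch
    _, attacked = pgd_attack(model, x, y, epsilon, iters=5, restarts=1)
    robust_acc = 1.0 - attacked.float().mean().item()
    should_stop = robust_acc < prev_acc - drop_tol
    return should_stop, robust_acc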
}, { "heading": "E TRAINING PARAMETERS FOR FIGURE 2", "text": "For all methods, we use a batch size of 128, and SGD optimizer with momentum 0.9 and weight decay 5 ∗ 10−4. We report the average results over 3 random seeds. Maximum learning rates used for the cyclic learning rate schedule are shown in Table 8." }, { "heading": "F COMBINING FREE ADVERSARIAL TRAINING WITH DAWNBENCH IMPROVEMENTS ON IMAGENET", "text": "While adding mixed-precision is a direct speedup to free adversarial training without hurting performance, using other optimization tricks such as the cyclic learning rate schedule, progressive resizing, and batch-norm regularization may affect the final performance of free adversarial training. Since ImageNet is too large to run a comprehensive search over the various parameters as was done for CIFAR10 in Table 3, we instead test the performance of free adversarial training when used as a drop-in replacement for FGSM adversarial training with all the same optimizations used for FGSM adversarial training. We use free adversarial training with m = 3 minibatch-replay, with 2 epochs for phase one, 2 epochs for phase two, and 1 epoch for phase three to be equivalent to 15 epochs of standard training. PGD+N denotes the accuracy under a PGD adversary with N restarts.\nA word of caution: this is not to claim that free adversarial training is completely incompatible with the DAWNBench optimizations on ImageNet. By giving free adversarial training more epochs, it may be possible recover the same or better performance. However, tuning the DAWNBench techniques to be optimal for free adversarial training is not the objective of this paper, and so this is merely to show what happens if we naively apply the same DAWNBench tricks used for FGSM adversarial training to free adversarial training. Since free adversarial training requires more epochs even when tuned with DAWNBench improvements for CIFAR10, we suspect that the same behavior occurs here for ImageNet, and so 15 epochs is likely not enough to obtain top performance for free adversarial training. Since one epoch of FGSM adversarial training is equivalent to two epochs of free training, a fairer comparison is to give free adversarial training 30 epochs instead of 15. Even with double the epochs (and thus the same compute time as FGSM adversarial training), we find that it gets closer but doesn’t quite recover the original performance of free adversarial training." } ]
2020
null
SP:eb8b8a0bae8d3f488caf70b6103ed3fd9631cb9f
[ "This paper introduces a better searching strategy in the context of automatic neural architecture search (NAS). Especially, they focus on improving the search strategy for previously proposed computationally effective weight sharing methods for NAS. Current search strategies for the weight sharing NAS methods either focus on uniformly training all the network paths or selectively train different network paths with different frequency, where both have their own issues like wasting resources for unpromising candidates and unfair comparison among network paths. To this end, this paper proposes a balanced training strategy with “selective drop mechanism”. Further, they validate their approach by showing leading performance on ImageNet under mobile settings.", "In this paper, the authors proposed a new training strategy in achieving better balance between training efficiency and evaluation accuracy with weight sharing-based NAS algorithms. It is consisted of two phrases: in phrase 1, all path are uniformly trained to avoid bias, in phrase 2, less competitive options are pruned to save cost. The proposed method achieved the SOTA on IN mobile setting. " ]
Automatic neural architecture search techniques are becoming increasingly important in the machine learning area. In particular, weight sharing methods have shown remarkable potential for finding good network architectures with few computational resources. However, existing weight sharing methods mainly suffer from limitations in their searching strategies: these methods either uniformly train all network paths to convergence, which introduces conflicts between branches and wastes a large amount of computation on unpromising candidates, or selectively train branches with different frequency, which leads to unfair evaluation and comparison among paths. To address these issues, we propose a novel neural architecture search method with a balanced training strategy to ensure fair comparisons and a selective drop mechanism to reduce conflicts among candidate paths. The experimental results show that our proposed method can achieve a leading performance of 79.0% on ImageNet under mobile settings, which outperforms other state-of-the-art methods in both accuracy and efficiency.
[]
[ { "authors": [ "Irwan Bello", "Barret Zoph", "Vijay Vasudevan", "Quoc V Le" ], "title": "Neural optimizer search with reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Smash: one-shot model architecture search through hypernetworks", "venue": "arXiv preprint arXiv:1708.05344,", "year": 2017 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Xin Chen", "Lingxi Xie", "Jun Wu", "Qi Tian" ], "title": "Progressive differentiable architecture search: Bridging the depth gap between search and evaluation", "venue": null, "year": 2019 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation policies from data", "venue": "arXiv preprint arXiv:1805.09501,", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Minghao Guo", "Zhao Zhong", "Wei Wu", "Dahua Lin", "Junjie Yan" ], "title": "Irlas: Inverse reinforcement learning for architecture search", "venue": "arXiv preprint arXiv:1812.05285,", "year": 2018 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single path one-shot neural architecture search with uniform sampling", "venue": null, "year": 1904 }, { "authors": [ "Andrew Howard", "Mark Sandler", "Grace Chu", "Liang-Chieh Chen", "Bo Chen", "Mingxing Tan", "Weijun Wang", "Yukun Zhu", "Ruoming Pang", "Vijay Vasudevan" ], "title": "Searching for mobilenetv3", "venue": null, "year": 1905 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Xin Li", "Yiming Zhou", "Zheng Pan", "Jiashi Feng" ], "title": "Partial order pruning: for best speed/accuracy trade-off in neural architecture search", "venue": null, "year": 1903 }, { "authors": [ "Hanwen Liang", "Shifeng Zhang", "Jiacheng Sun", "Xingqiu He", "Weiran Huang", "Kechen Zhuang", "Zhenguo Li" ], "title": "Darts+: Improved differentiable architecture search with early stopping", "venue": null, "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Asaf Noy", "Niv Nayman", "Tal Ridnik", "Nadav Zamir", "Sivan Doveh", "Itamar Friedman", "Raja Giryes", "Lihi Zelnik-Manor" ], "title": "Asap: Architecture 
search, anneal and prune", "venue": null, "year": 1904 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "arXiv preprint arXiv:1802.03268,", "year": 2018 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V Le" ], "title": "Searching for activation functions", "venue": "arXiv preprint arXiv:1710.05941,", "year": 2017 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Christian Sciuto", "Kaicheng Yu", "Martin Jaggi", "Claudiu Musat", "Mathieu Salzmann" ], "title": "Evaluating the search phase of neural architecture search", "venue": null, "year": 1902 }, { "authors": [ "Dimitrios Stamoulis", "Ruizhou Ding", "Di Wang", "Dimitrios Lymberopoulos", "Bodhi Priyantha", "Jie Liu", "Diana Marculescu" ], "title": "Single-path nas: Designing hardware-efficient convnets in less than 4 hours", "venue": null, "year": 1904 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "arXiv preprint arXiv:1905.11946,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Quoc V Le" ], "title": "Mnasnet: Platform-aware neural architecture search for mobile", "venue": "arXiv preprint arXiv:1807.11626,", "year": 2018 }, { "authors": [ "Bichen Wu", "Xiaoliang Dai", "Peizhao Zhang", "Yanghan Wang", "Fei Sun", "Yiming Wu", "Yuandong Tian", "Peter Vajda", "Kurt Keutzer" ], "title": "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search", "venue": "arXiv preprint arXiv:1812.03443,", "year": 2018 }, { "authors": [ "Shen Yan", "Biyi Fang", "Faen Zhang", "Yu Zheng", "Xiao Zeng", "Hui Xu", "Mi Zhang" ], "title": "Hm-nas: Efficient neural architecture search via hierarchical masking", "venue": null, "year": 2019 }, { "authors": [ "Zhao Zhong", "Junjie Yan", "Wei Wu", "Jing Shao", "Cheng-Lin Liu" ], "title": "Practical block-wise neural network architecture generation", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zhao Zhong", "Zichen Yang", "Boyang Deng", "Junjie Yan", "Wei Wu", "Jing Shao", "Cheng-Lin Liu" ], "title": "Blockqnn: Efficient block-wise neural network architecture generation", "venue": "arXiv preprint arXiv:1808.05584,", "year": 2018 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": null, "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": null, "text": "Automatic neural architecture search techniques are becoming increasingly important in machine learning area. Especially, weight sharing methods have shown remarkable potentials on searching good network architectures with few computational resources. However, existing weight sharing methods mainly suffer limitations on searching strategies: these methods either uniformly train all network paths to convergence which introduces conflicts between branches and wastes a large amount of computation on unpromising candidates, or selectively train branches with different frequency which leads to unfair evaluation and comparison among paths. To address these issues, we propose a novel neural architecture search method with balanced training strategy to ensure fair comparisons and a selective drop mechanism to reduce conflicts among candidate paths. The experimental results show that our proposed method can achieve a leading performance of 79.0% on ImageNet under mobile settings, which outperforms other state-of-the-art methods in both accuracy and efficiency." }, { "heading": "1 INTRODUCTION", "text": "The fast developing of artificial intelligence has raised the demand to design powerful neural networks. Automatic neural architecture search methods (Zoph & Le, 2016; Zhong et al., 2018; Pham et al., 2018) have shown great effectiveness in recent years. Among them, methods based on weight sharing (Pham et al., 2018; Liu et al., 2018; Cai et al., 2018; Guo et al., 2019) show great potentials on searching architectures with limited computational resources. These methods are divided into 2 categories: alternatively training ones (Pham et al., 2018; Liu et al., 2018; Cai et al., 2018) and oneshot based ones (Brock et al., 2017; Bender et al., 2018). As shown in Fig 2, both categories construct a super-net to reduce computational complexity. Methods in the first category parameterize the structure of architectures with trainable parameters and alternatively optimize architecture parameters and network parameters. In contrast, one-shot based methods train network parameters to convergence beforehand and then select architectures with fixed parameters. Both categories achieve better performance with significant efficiency improvement than direct search.\nDespite of these remarkable achievements, methods in both categories are limited in their searching strategies. In alternatively training methods, network parameters in different branches are applied with different training frequency or updating strength according to searching strategies, which makes different sub-network convergent to different extent. Therefore the performance of sub-networks extracted from super-net can not reflect the actual ability of that trained independently without weight sharing. Moreover, some paths might achieve better performance at early steps while perform not well when actually trained to convergence. In alternatively training methods, these operators will get more training opportunity than other candidates at early steps due to their well performance. Sufficient training in turn makes them perform better and further obtain more training opportunities, forming the Matthew Effect. In contrast, other candidates will be always trained insufficiently and can never show their real ability.\nDifferently, One-shot methods train paths with roughly equal frequency or strength to avoid the Matthew Effect between parameters training and architectures selection. 
However, training all paths to convergence costs much more time. Besides, the operators are shared by plenty of sub-networks, making the backward gradients from different training steps conflict heavily. To address this issue, we follow the balanced training strategy to avoid the Matthew Effect, and propose a drop-paths approach to reduce mutual interference among paths, as shown in Fig 1.

Experiments are conducted on the ImageNet classification task. The search process costs fewer computational resources than competing methods and our searched architecture achieves an outstanding accuracy of 79.0%, which outperforms state-of-the-art methods under mobile settings. The proposed method is compared with other competing algorithms with visualized analysis, which demonstrates its effectiveness. Moreover, we also conduct experiments to analyze the mutual interference in weight sharing and to demonstrate the rationality of the gradual drop-paths strategy." }, { "heading": "2 RELATED WORK", "text": "Automatic neural architecture search techniques have attracted much attention in recent years. NASNet (Zoph & Le, 2016; Zoph et al., 2018) proposes a framework to search for architectures with reinforcement learning, and evaluates each of the searched architectures by training it from scratch. BlockQNN (Zhong et al., 2018; Guo et al., 2018; Zhong et al., 2018) expands the search space to the entire DAG and selects nets with Q-learning. Network pruning methods (Li et al., 2019; Noy et al., 2019) prune redundant architectures to reduce search spaces. Considering the search policy, most of these methods depend on reinforcement learning, evolutionary algorithms and gradient based algorithms (Bello et al., 2017; Liu et al., 2018; Cai et al., 2018).

The works most related to our method are the ones based on weight sharing proposed by (Pham et al., 2018), from which two streams are derived: alternatively training methods (Cai et al., 2018; Liu et al., 2018) and one-shot methods (Brock et al., 2017; Bender et al., 2018; Guo et al., 2019). Methods in the first stream alternatively train architecture parameters and network parameters. During the search process, operators in the super-net are selectively trained and evaluated with a certain policy and the policy is updated dynamically according to the evaluations. Among them, ENAS (Pham et al., 2018) introduces RL to select paths. DARTS (Liu et al., 2018) improves the accuracy and efficiency of the path selection policy by treating the importance of each path as a trainable parameter. ProxyLessNAS (Cai et al., 2018) proposes to directly search on target datasets with single paths and makes the latency term differentiable. Single-Path NAS (Stamoulis et al., 2019) directly shares weights via a super-kernel. By contrast, one-shot based methods (Guo et al., 2019; Brock et al., 2017; Bender et al., 2018) first train each path in the super-net with equal frequency to convergence; then all the architectures are selected from the super-net and evaluated with fixed parameters. Darts+ (Liang et al., 2019) improves DARTS with early stopping. Progressive-NAS (Chen et al., 2019) gradually increases blocks while searching. HM-NAS (Yan et al., 2019) uses masks to select paths. Our work benefits from the advantages of both categories: on one hand, the importance factors are evaluated with a gradient based approach, but this has no influence on the training of shared parameters. On the other hand, the shared parameters are updated uniformly as in one-shot methods."
}, { "heading": "3 APPROACH", "text": "ProxyLessNAS(Cai et al., 2018) and Single Path One-shot(Guo et al., 2019) proposed to train the super-net with only one path on in each step to make the performance trained with weight sharing more close to that trained alone. Both of them enhance the performance of weight sharing to a higher level. ProxyLessNAS updates architecture parameters and network parameters alternatively. Paths are selectively trained according to their performance, and paths with higher performance get more training opportunities. Single Path One-shot first proposed to balanced train all paths until convergence and then use evolution algorithms to select network structures. The equivalent functions of the choice blocks in two methods are described as mPL and mOS in Eq 1:\nmPL(x) = o1(x) with probability p1, . . . ,\noN (x) with probability p2. ,mOS(x) =\n o1(x) with probability 1/N, ...,\noN (x) with probability 1/N. (1)\nOur method follows the alternatively training ones, in which architecture parameters and network parameters are optimized alternatively in each step. To give a better solution to the problems discussed above, we train each candidate path with equal frequency to avoid the \"Matthew effect\" and gradually dropped least promising paths during searching process to reduce conflicts among candidate paths." }, { "heading": "3.1 PIPELINE", "text": "The pipeline of our method is shown in Algorithm 1. First of all, a super-net is constructed with L choice blocks O1, O2, . . . , OL, as shown in Fig 1. Each choice block Ol is composed of M candidate paths and corresponding operators ol,1, ol,2, . . . , ol,M . The importance factor of ol,m is denoted as αl,m and αl,m are converted to probability factor pl,m using softmax normalization.\nSecondly, the parameters of ol,m and their importance factors αl,m are trained alternatively in Phase 1 and Phase 2. When training αl,m, latency term is introduced to balance accuracy and complexity. Paths with αl,m lower than thα will be dropped and no more trained.\nAlgorithm 1 Searching Process Initialization: Denote Ol as the choice block for layer l with M candidate operators {ol,1, ol,2, . . . , ol,M}. αl,1, αl,2, . . . , αl,M are the corresponding importance factors of candidate operators and initialized with identical value. Smax denotes the max number of optimization steps.\n1: while t < Smax do 2: Phase1: Randomly select ol,ml ∈ Ol for block Ol with uniform probability, then fix all αl,m\nand train the super-net constructed with the selected o1,m1 , o2,m2 , . . . , oL,mL for some steps. 3: Phase2: Fix all the parameters in ol,m and measure their flops/latency. Then evaluate each\noperator ol,m with both cross-entropy loss and flops/latency loss. Update αl,m according to the losses feedback.\n4: for ol,m ∈ Ol do 5: if αl,m < thα then Ol = Ol \\ {ol,m} t = t+ 1 6: for ol,m ∈ Ol do ml = argmaxm(αl,m) 7: return o1,m1 , o2,m2 , . . . , oL,mL\nFinally, after alternatively training ol,m and αl,m for given steps, paths with the highest importance factor in each choice block are selected to compose a neural architecture as the searching result." }, { "heading": "3.2 BALANCED TRAINING", "text": "Alternatively training methods focus computational resources on most promising candidates to reduce the interference from redundant branches. However, some operators that perform well at early phases might not perform as well when they are trained to convergence. 
These operators might get much more training opportunities than others due to their better performance at the beginning steps. Higher training frequency in turn maintains their dominant position in the following search process regardless of their actual ability, forming the Matthew Effect. In contrast, operators with high performance at convergence might never get the opportunity to be trained sufficiently. Therefore, the accuracy of alternatively training methods might degrade due to inaccurate evaluation and comparison among candidate operators.

Our method follows the alternating optimization strategy. Differently, we only use gradients for architecture optimization, while randomly sampling paths with uniform probability when training network parameters, to avoid the Matthew Effect. More specifically, when updating the network parameters of $o_{l,m}$ in Phase 1 and the architecture parameters in Phase 2, the equivalent output of choice block $O_l$ is given as $O_l^{path}$ in Eq 2 and $O_l^{arch}$ in Eq 3:

$O_l^{path}(x) = o_{l,m}(x) \begin{cases} \text{with probability } 1/M', & \text{if } \alpha_{l,m} > th_\alpha \\ \text{with probability } 0, & \text{else.} \end{cases}$ (2)

$O_l^{arch}(x) = o_{l,m}(x) \begin{cases} \text{with probability } p_{l,m}, & \text{if } \alpha_{l,m} > th_\alpha \\ \text{with probability } 0, & \text{else.} \end{cases}$ (3)

where $M'$ is the number of operators currently remaining in $O_l$, and $p_{l,m}$ is the softmax form of $\alpha_{l,m}$. The $\alpha_{l,m}$ of dropped paths are not taken into account when calculating $p_{l,m}$. The parameters in both phases are optimized with Stochastic Gradient Descent (SGD). In Phase 1, the output in Eq 2 only depends on network parameters, thus gradients can be calculated with the chain rule. In Phase 2, the outputs depend not only on the fixed network parameters but also on the architecture parameters $\alpha_{l,m}$. Note that $O_l^{arch}(x)$ is not differentiable with respect to $\alpha_{l,m}$, thus we introduce the manually defined derivatives proposed by Cai et al. (2018) to deal with this issue: Eq 3 can be expressed as $O_l^{arch}(x) = \sum_m g_{l,m} \cdot o_{l,m}(x)$, where $g_{l,1}, g_{l,2}, \dots, g_{l,M'}$ is a one-hot vector with only one element equal to 1 while the others equal 0. Assuming $\partial g_{l,j}/\partial p_{l,j} \approx 1$ according to Cai et al. (2018), the derivatives of $O_l^{arch}(x)$ w.r.t. $\alpha_{l,m}$ are defined as:

$\frac{\partial O_l^{arch}(x)}{\partial \alpha_{l,m}} = \sum_{j=1}^{M'} \frac{\partial O_l^{arch}(x)}{\partial g_{l,j}} \frac{\partial g_{l,j}}{\partial p_{l,j}} \frac{\partial p_{l,j}}{\partial \alpha_{l,m}} \approx \sum_{j=1}^{M'} \frac{\partial O_l^{arch}(x)}{\partial g_{l,j}} \frac{\partial p_{l,j}}{\partial \alpha_{l,m}} = \sum_{j=1}^{M'} \frac{\partial O_l^{arch}(x)}{\partial g_{l,j}} p_j(\delta_{mj} - p_m)$ (4)

From now on, $O_l^{path}(x)$ and $O_l^{arch}(x)$ are differentiable w.r.t. the network parameters and the architecture parameters respectively. Both sets of parameters can be optimized alternatively in Phase 1 and Phase 2." }, { "heading": "3.3 SELECTIVELY DROP PATHS", "text": "One-shot based methods, such as Single Path One-shot (Guo et al., 2019), also train paths uniformly. These methods train the network parameters of each path uniformly to convergence, after which a search policy is applied to explore the best structure with fixed network parameters. However, the optimizations of candidate operators in the same choice block actually conflict. Considering N candidate operators in the same choice block and their equivalent functions $f_1, f_2, \dots, f_N$, given $F_{in}$ and $F_{out}$ as the equivalent functions of the sub-supernet before and after the current choice block, $x_i$ and $y_i$ as input data and labels from the training dataset, and $L$ as the loss metric, the optimization of network parameters can be described as:

$\min_{w_n} L(y_i, F_{out}(f_n(F_{in}(x_i), w_n))), \quad n = 1, 2, \dots, N$ (5)

When the super-net is trained to convergence, $F_{in}$ and $F_{out}$ are comparatively stable. In this situation, $f_1(w_1), f_2(w_2), \dots, f_N(w_N)$ are actually trained to fit the same function. 
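To make the two sampling modes of Eqs. (2)-(3) and the gradient estimate of Eq. (4) concrete, here is a minimal PyTorch-style sketch of a choice block. It is a hedged illustration, not the authors' code: the class and variable names are hypothetical, and Eq. (4) is approximated here by a single-path Monte-Carlo estimate in which only the sampled operator contributes to the sum, in the spirit of ProxyLessNAS.

```python
import torch
import torch.nn as nn

class ChoiceBlock(nn.Module):
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)                     # candidate operators o_{l,1..M}
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # importance factors alpha_{l,m}
        self.register_buffer("alive", torch.ones(len(ops), dtype=torch.bool))

    def forward_path(self, x):
        # Phase 1, Eq. (2): sample uniformly among the M' remaining operators.
        idx = int(torch.multinomial(self.alive.float(), 1))
        return self.ops[idx](x)

    def forward_arch(self, x):
        # Phase 2, Eq. (3): sample with p = softmax(alpha) over remaining paths only.
        logits = self.alpha.masked_fill(~self.alive, float("-inf"))
        p = torch.softmax(logits, dim=0)
        idx = int(torch.multinomial(p, 1))
        g = torch.zeros_like(p)
        g[idx] = 1.0
        g = g - p.detach() + p      # forward: one-hot g; backward: dp/dalpha, cf. Eq. (4)
        with torch.no_grad():       # network weights are frozen in Phase 2
            out = self.ops[idx](x)
        return g[idx] * out
```

During the search, `forward_path` would be used when updating network weights and `forward_arch` when updating the importance factors with the joint loss of Eq. (6).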
However, when operators are trained independently without weight sharing, different operators are unlikely to output the same feature maps. Taking the super-net in Fig 2(b) as an example, the four operators in choice block 1 are likely to be optimized to fit each other. Intuitively, Conv3 and Identity are unlikely to fit the same function when trained without weight sharing. On the other hand, the operators in the second choice block are trained to be compatible with various input features from different operators in the first choice block. In contrast, each operator processes data from only one input when networks are trained independently. Both problems widen the gap between networks trained with and without weight sharing.

Fewer candidate paths help reduce the conflicts among operators, as will be explained in the experiments section. Therefore, a drop-paths strategy is applied in our method to reduce mutual interference among candidate operators during the search process. Paths with performance lower than a threshold are permanently dropped to reduce their influence on the remaining candidates.

When updating $\alpha_{l,m}$ in Phase 2, we follow the strategy in ProxyLessNAS (Cai et al., 2018) to sample a path in each choice block with probability $p_{l,m}$, and optimize $\alpha_{l,m}$ by minimizing the expected joint loss $L$:

$L = L_{CE} + \beta L_{LA} = L_{CE} + \beta \sum_{l=1}^{L} \sum_{m=1}^{M} p_{l,m} L_{LA}^{l,m}$ (6)

where $L_{CE}$ and $L_{LA}$ are the expected cross-entropy loss and latency loss, $L_{LA}^{l,m}$ is the flops or latency of operator $o_{l,m}$, and $\beta$ is a hyper-parameter to balance the two loss terms. We regard the importance factor $\alpha_{l,m}$ and its softmax form $p_{l,m}$ as the sampling probability and use Eq 6 to optimize $\alpha_{l,m}$. The derivatives of $L_{CE}$ and $L_{LA}$ w.r.t. $\alpha_{l,m}$ can be obtained from Eq 4 and Eq 6 respectively. Note that $\alpha_{l,m}$ is only applied to evaluate the importance of paths and has no influence on the balanced training strategy in Phase 1. After each step of evaluation, paths with low $\alpha_{l,m}$ are dropped and will not be trained anymore:

$O_l = O_l \setminus \{o_{l,m_l}\}, \quad \text{if } \alpha_{l,m_l} < th_\alpha, \ \forall m_l$ (7)

The limitations of alternatively training methods and one-shot based ones are relieved by the proposed balanced training and drop-paths strategies respectively. The two phases are trained alternatively until the stop condition is met, then the path with the highest $\alpha_{l,m}$ in each block is selected to compose an architecture as the search result." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETTINGS", "text": "Datasets The target architecture is directly searched on ImageNet (Deng et al., 2009) for the classification task. 50000 images are extracted from the training set to train the architecture parameters $\alpha_{l,m}$, and the rest of the training dataset is used to train the network weights.

Search Space We follow the search space in MNASNet (Tan et al., 2018), where each choice block includes an identity operator and 6 MobileNetV2 blocks with kernel sizes 3, 5, 7 and expand ratios 3, 6 respectively. The SE-Layer (Hu et al., 2018) is not applied to the search space.

Training Detail We search on V100 GPUs for 160 GPU hours. The shared parameters are trained with a batch size of 1024 and a learning rate of 0.1. $\alpha_{l,m}$ is trained with the Adam optimizer and a 1e-3 initial learning rate. Finally, the searched architecture is trained from scratch according to the setting of MobileNetV2 (Sandler et al., 2018). 
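For completeness, the Phase-2 objective of Eq. (6) and the drop rule of Eq. (7) can be sketched as follows, to be read together with the `ChoiceBlock` sketch above. This is again a hedged illustration: the function names, the `beta` default and the threshold are our placeholders, and the per-block latency tables are assumed to be precomputed tensors.

```python
import torch

def arch_loss(ce_loss, blocks, latencies, beta=0.1):
    # Eq. (6): L = L_CE + beta * sum_l sum_m p_{l,m} * L_LA^{l,m},
    # with the softmax taken over the still-alive paths of each block.
    lat_loss = ce_loss.new_zeros(())
    for block, lat in zip(blocks, latencies):   # lat: per-operator flops/latency tensor
        logits = block.alpha.masked_fill(~block.alive, float("-inf"))
        p = torch.softmax(logits, dim=0)        # dropped paths get probability 0
        lat_loss = lat_loss + (p * lat).sum()   # expected cost of block l
    return ce_loss + beta * lat_loss

def drop_paths(blocks, th_alpha):
    # Eq. (7): permanently drop paths whose importance factor fell below th_alpha.
    with torch.no_grad():
        for block in blocks:
            block.alive &= block.alpha >= th_alpha  # once dropped, never revived
```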
The searched networks are trained from scratch on the training dataset with the following hyper-parameters: batch size 2048, learning rate 1.4, weight decay 2e-5, cosine learning rate schedule for 350 epochs, dropout 0.2, label smoothing rate 0.1." }, { "heading": "4.2 EXPERIMENTAL RESULTS", "text": "Searched Architectures As shown in Fig 3, BetaNet-A and BetaNet-B are searched with flops and GPU latency limitations respectively. BetaNet-A tends to select operators with lower flops at the front layers, where feature maps are large, and operators with higher flops elsewhere to enhance capacity. BetaNet-B tends to select large kernels and fewer layers, since GPUs perform better with parallelism.

Performance on ImageNet Experimental results compared with state-of-the-art methods under comparable flops and GPU latency are shown in Table 1 and Table 2 respectively, and our architectures achieve the best performance. SENet (Hu et al., 2018) with ratio 0.0625 is applied in Table 1 as BetaNet-A + SE. BetaNet-A performs better than MobileNetV2 with comparable flops by 3.1%. BetaNet-B performs better with comparable latency by 3.8%. Auto-augment (Cubuk et al., 2018) and the SWISH activation (Ramachandran et al., 2017) are also applied to the searched BetaNet-A and the performance is further enhanced to 79.0%. As shown in Fig 4, BetaNet-A outperforms MobileNetV2, ProxyLessNAS (Cai et al., 2018) and MNASNet (Tan et al., 2018) with various depth multipliers." }, { "heading": "4.3 ABLATION STUDIES", "text": "" }, { "heading": "4.3.1 COMPARED WITH GRADIENT BASED METHODS", "text": "Experiments are conducted in this sub-section to analyze the contribution of the proposed search approach: balanced training and selective drop. The search process of BetaNet-A is compared with that of ProxyLessNAS (Cai et al., 2018). Fig 5(a) and (b) show the variation of the architecture parameters α and the updating frequency of 2 choice blocks for each method respectively. In the result of ProxyLessNAS shown in Fig 5(a) (top left), the conv3_exp6 operator performs better at early phases and gets many more training opportunities than the other operators and is thus trained more sufficiently, which might lead to degraded performance due to the above-mentioned "Matthew Effect". Differently, our strategy ensures that all remaining paths are trained with roughly equal frequency. In addition, our method trains the remaining paths much more frequently than ProxyLessNAS, which spends many steps on unpromising redundant paths." }, { "heading": "4.3.2 EXPERIMENTS WITH DIFFERENT RANDOM SEEDS", "text": "(Sciuto et al., 2019) suggests evaluating weight sharing methods with different random seeds. To examine the robustness of the proposed method, experiments are conducted with our approach with 8 random seeds, and 8 architectures are searched independently. We also randomly sample 8 architectures from the same search space and super-net. A constraint is applied to ensure that both the searched and the randomly sampled networks have flops similar to BetaNet-A. All 16 networks are trained with the same setting. As shown in Fig 6, all the searched architectures have similar performance on ImageNet. Our method outperforms the randomly sampled architectures and other competing methods with robust performance." }, { "heading": "4.3.3 ANALYSIS OF DROP PATH", "text": "Besides the policy for selecting architecture parameters, the optimization of network parameters in our method differs from that of one-shot methods mainly in the drop-paths strategy. 
To give a better understanding of its effectiveness, we conduct experiments on 4 network architectures with 3 different training policies to explore the influence of the number of candidate branches on weight sharing. As shown in Figure 7, networks in 3 groups are trained with no weight sharing (NS), 2-branch weight sharing (B2) and 4-branch weight sharing (B4), respectively. Networks in NS, B2 and B4 are trained on CIFAR-10 for 30, 60 and 120 epochs respectively to ensure that network parameters are trained with a comparable number of steps in each group.

The accuracies of the 4 networks trained with NS, B2 and B4 are shown in Fig 7. The experiments indicate 2 phenomena: 1. The accuracy obtained via B2 with 60 epochs is much higher than via B4 with 120 epochs, which indicates that fewer branches in weight sharing help network parameters converge better. 2. The relative rank of the 4 networks trained with B2 is more similar to that with NS than that with B4, which indicates that fewer branches give better guidance for network selection and demonstrates the rationality of our drop-paths strategy." }, { "heading": "4.3.4 COMPARISON WITH OTHER WEIGHT SHARING METHODS ON SEARCH STRATEGIES", "text": "Our search and optimization strategy is compared to those of other weight sharing approaches, as shown in Tab 3. Alternatively training methods tend to adopt similar strategies to decide which path to train in optimization phases and which path to evaluate in search phases. However, search strategies are designed to select paths and inevitably treat candidates discriminatively. Therefore, search strategies are usually not suitable for optimization phases. Our method applies different policies in the optimization phases and the search phases to deal with this problem.

Some of the methods sample a single sub-net in each step while training the super-net, instead of summing up features from all possible paths, to bridge the gap between training the super-net and evaluating sub-nets. Our method follows this strategy to improve performance and reduce GPU memory consumption as well. To deal with mutual interference among candidates in the same choice block, Progressive-DARTS (Chen et al., 2019) and our method make an effort to drop redundant paths to reduce the interference from them." }, { "heading": "4.4 DISCUSSION ON TWO STREAMS OF WEIGHT SHARING METHODS", "text": "There is always a compromise between accuracy and efficiency. Weight sharing shows remarkable improvements in reducing search cost, though it introduces inevitable bias on accuracy. Alternatively training approaches are actually greedy methods, since they are talented at searching for the next optimal solution. Meanwhile, architectures that are less competitive at early phases are abandoned. In contrast, one-shot methods attempt to offer equal opportunities to all candidates, which leads to a more global solution. However, operators trained via weight sharing have to deal with outputs from multiple former paths, which is challenging for operators with fewer flops. Therefore these operators suffer more from mutual interference than those with larger flops.

Our approach tries to balance the advantages and disadvantages of both streams. On one hand, we try to ensure the accuracy of the most promising operators, which is similar to the strategies in alternatively training ones; unlike those, however, only the operators with performance much lower than the average of the others are dropped. On the other hand, we train paths in a balanced way following the strategy of one-shot based ones. 
Unlike them, paths are gradually dropped to a smaller number in our method to reduce conflicts." }, { "heading": "5 CONCLUSION", "text": "This work proposes a novel neural architecture search method via balanced training and selective drop strategies. The proposed method benefits from both streams of weight sharing approaches and relieves their limitations in optimizing the parameters of the super-net. Moreover, our method achieves a new state-of-the-art result of 79.0% on ImageNet under mobile settings with even less search cost, which demonstrates its effectiveness." } ]
2019
BETANAS: BALANCED TRAINING AND SELECTIVE DROP FOR NEURAL ARCHITECTURE SEARCH
SP:1f95868a91ef213ebf3be6ca2a0f059e93b4be37
[ "The paper proposes to use autoencoder for anomaly localization. The approach learns to project anomalous data on an autoencoder-learned manifold by using gradient descent on energy derived from the autoencoder's loss function. The proposed method is evaluated using the anomaly-localization dataset (Bergmann et al. CVPR 2019) and qualitatively for the task of image inpainting task on CelebA dataset.", "This paper discusses an important problem of solving the visual inspection problem limited supervision. It proposes to use VAE to model the anomaly detection. The major concern is how the quality of f_{VAE} is estimated. From the paper it seems f_{VAE} is not updated. Will it be sufficient to rely a fixed f_{VAE} and blindly trust its quality?" ]
Autoencoder reconstructions are widely used for the task of unsupervised anomaly localization. Indeed, an autoencoder trained on normal data is expected to only be able to reconstruct normal features of the data, allowing the segmentation of anomalous pixels in an image via a simple comparison between the image and its autoencoder reconstruction. In practice however, local defects added to a normal image can deteriorate the whole reconstruction, making this segmentation challenging. To tackle the issue, we propose in this paper a new approach for projecting anomalous data on an autoencoder-learned normal data manifold, by using gradient descent on an energy derived from the autoencoder's loss function. This energy can be augmented with regularization terms that model priors on what constitutes the user-defined optimal projection. By iteratively updating the input of the autoencoder, we bypass the loss of high-frequency information caused by the autoencoder bottleneck. This allows us to produce images of higher quality than classic reconstructions. Our method achieves state-of-the-art results on various anomaly localization datasets. It also shows promising results at an inpainting task on the CelebA dataset.
[ { "affiliations": [], "name": "David Dehaene" }, { "affiliations": [], "name": "Oriel Frigo" }, { "affiliations": [], "name": "Sébastien Combrexelle" }, { "affiliations": [], "name": "Pierre Eline AnotherBrain" } ]
[ { "authors": [ "Fazil Altinel", "Mete Ozay", "Takayuki Okatani" ], "title": "Deep structured energy-based image inpainting", "venue": "24th International Conference on Pattern Recognition (ICPR),", "year": 2018 }, { "authors": [ "Jinwon An", "Sungzoon Cho" ], "title": "Variational autoencoder based anomaly detection using reconstruction probability", "venue": "Technical report, SNU Data Mining Center,", "year": 2015 }, { "authors": [ "Christoph Baur", "Benedikt Wiestler", "Shadi Albarqouni", "Nassir Navab" ], "title": "Deep autoencoding models for unsupervised anomaly segmentation in brain", "venue": "MR images. CoRR,", "year": 2018 }, { "authors": [ "Paul Bergmann", "Sindy Löwe", "Michael Fauser", "David Sattlegger", "Carsten Steger" ], "title": "Improving unsupervised defect segmentation by applying structural similarity to autoencoders", "venue": null, "year": 1807 }, { "authors": [ "Paul Bergmann", "Michael Fauser", "David Sattlegger", "Carsten Steger" ], "title": "Mvtec ad — a comprehensive real-world dataset for unsupervised anomaly detection", "venue": null, "year": 2019 }, { "authors": [ "Marcelo Bertalmio", "Guillermo Sapiro", "Vincent Caselles", "Coloma Ballester" ], "title": "Image inpainting", "venue": "In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques,", "year": 2000 }, { "authors": [ "Antonio Criminisi", "Patrick Pérez", "Kentaro Toyama" ], "title": "Region filling and object removal by exemplar-based image inpainting", "venue": "Trans. Img. Proc.,", "year": 2004 }, { "authors": [ "Bin Dai", "David P. Wipf" ], "title": "Diagnosing and enhancing", "venue": "VAE models. CoRR,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ian J. Goodfellow" ], "title": "NIPS 2016 tutorial", "venue": "Generative adversarial networks. CoRR,", "year": 2017 }, { "authors": [ "Douglas M Hawkins" ], "title": "Identification of outliers. Monographs on applied probability and statistics", "venue": "Chapman and Hall, London [u.a.],", "year": 1980 }, { "authors": [ "Matthew D. Hoffman", "Matthew J. Johnson" ], "title": "Elbo surgery: yet another way to carve up the variational evidence lower bound", "venue": "In NIPS 2016 Workshop on Advances in Approximate Bayesian Inference,", "year": 2016 }, { "authors": [ "Oleg Ivanov", "Michael Figurnov", "Dmitry Vetrov" ], "title": "Variational autoencoder with arbitrary conditioning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P. 
Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Takashi Matsubara", "Ryosuke Tachibana", "Kuniaki Uehara" ], "title": "Anomaly machine component detection by deep generative model with unregularized", "venue": "score. CoRR,", "year": 2018 }, { "authors": [ "Deepak Pathak", "Philipp Krähenbühl", "Jeff Donahue", "Trevor Darrell", "Alexei Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Suman V. Ravuri", "Oriol Vinyals" ], "title": "Classification accuracy score for conditional generative models", "venue": null, "year": 1905 }, { "authors": [ "Thomas Schlegl", "Philipp Seeböck", "Sebastian M Waldstein", "Ursula Schmidt-Erfurth", "Georg Langs" ], "title": "Unsupervised anomaly detection with generative adversarial networks to guide marker discovery", "venue": "In International Conference on Information Processing in Medical Imaging,", "year": 2017 }, { "authors": [ "Thomas Schlegl", "Philipp Seeböck", "Sebastian M. Waldstein", "Georg Langs", "Ursula" ], "title": "SchmidtErfurth. f-anogan: Fast unsupervised anomaly detection with generative adversarial networks", "venue": "Medical Image Analysis,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian J. Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Hoang Thanh-Tung", "Truyen Tran", "Svetha Venkatesh" ], "title": "Improving generalization and stability of generative adversarial networks", "venue": null, "year": 1902 }, { "authors": [ "David Zimmerer", "Jens Petersen", "Simon A.A. Kohl", "Klaus H. Maier-Hein" ], "title": "A case for the score: Identifying image anomalies using variational autoencoder gradients", "venue": "In 32nd Conference on Neural Information Processing Systems (NeurIPS", "year": 2018 }, { "authors": [ "David Zimmerer", "Fabian Isensee", "Jens Petersen", "Simon Kohl A. A", "Klaus H. Maier-Hein" ], "title": "Unsupervised anomaly localization using variational auto-encoders", "venue": null, "year": 1907 }, { "authors": [ "Zimmerer" ], "title": "2019) proposed to perform anomaly localization using different scores derived from the gradient of the VAE loss. In particular, it has been shown that the product of the VAE reconstruction error with the gradient of the KL divergence was very informative for medical images. In table 2 we compare the pixel-wise anomaly detection", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Automating visual inspection on production lines with artificial intelligence has gained popularity and interest in recent years. Indeed, the analysis of images to segment potential manufacturing defects seems well suited to computer vision algorithms. However these solutions remain data hungry and require knowledge transfer from human to machine via image annotations. Furthermore, the classification in a limited number of user-predefined categories such as non-defective, greasy, scratched and so on, will not generalize well if a previously unseen defect appears. This is even more critical on production lines where a defective product is a rare occurrence. For visual inspection, a better-suited task is unsupervised anomaly detection, in which the segmentation of the defect must be done only via prior knowledge of non-defective samples, constraining the issue to a two-class segmentation problem.\nFrom a statistical point of view, an anomaly may be seen as a distribution outlier, or an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism (Hawkins, 1980). In this setting, generative models such as Variational AutoEncoders (VAE, Kingma & Welling (2014)), are especially interesting because they are capable to infer possible sampling mechanisms for a given dataset. The original autoencoder (AE) jointly learns an encoder model, that compresses input samples into a low dimensional space, and a decoder, that decompresses the low dimensional samples into the original input space, by minimizing the distance between the input of the encoder and the output of the decoder. The more recent variant, VAE, replaces the deterministic encoder and decoder by stochastic functions, enabling the modeling of the distribution of the dataset samples as well as the generation of new, unseen samples. In both models, the output decompressed sample given an input is often called the reconstruction, and is used as some sort of projection of the input on the support of the normal data distribution, which we will call the normal manifold. In most unsupervised anomaly detection methods based on VAE, models are trained on flawless data and defect detection and localization is then performed using a\n∗Equal contributions.\ndistance metric between the input sample and its reconstruction (Bergmann et al., 2018; 2019; An & Cho, 2015; Baur et al., 2018; Matsubara et al., 2018).\nOne fundamental issue in this approach is that the models learn on the normal manifold, hence there is no guarantee of the generalization of their behavior outside this manifold. This is problematic since it is precisely outside the dataset distribution that such methods intend to use the VAE for anomaly localization. Even in the case of a model that always generates credible samples from the dataset distribution, there is no way to ensure that the reconstruction will be connected to the input sample in any useful way. An example illustrating this limitation is given in figure 1, where a VAE trained on regular grid images provides a globally poor reconstruction despite a local perturbation, making the anomaly localization challenging.\nIn this paper, instead of using the VAE reconstruction, we propose to find a better projection of an input sample on the normal manifold, by optimizing an energy function defined by an autoencoder architecture. 
Starting at the input sample, we iterate gradient descent steps on the input to converge to an optimum, simultaneously located on the data manifold and closest to the starting input. This method allows us to add prior knowledge about the expected anomalies via regularization terms, which is not possible with the raw VAE reconstruction. We show that such an optimum is better than previously proposed autoencoder reconstructions for localizing anomalies on a variety of unsupervised anomaly localization datasets (Bergmann et al., 2019) and present its inpainting capabilities on the CelebA dataset (Liu et al., 2015). We also propose a variant of the standard gradient descent that uses the pixel-wise reconstruction error to speed up the convergence of the energy." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 GENERATIVE MODELS", "text": "In unsupervised anomaly detection, the only data available during training are samples x from a non-anomalous dataset $X \subset \mathbb{R}^d$. In a generative setting, we suppose the existence of a probability function of density q, having its support on all $\mathbb{R}^d$, from which the dataset was sampled. The generative objective is then to model an estimate of the density q, from which we can obtain new samples close to the dataset. Popular generative architectures are Generative Adversarial Networks (GAN, Goodfellow et al. (2014)), which concurrently train a generator G to generate samples from random, low-dimensional noise $z \sim p$, $z \in \mathbb{R}^l$, $l \ll d$, and a discriminator D to classify generated samples and dataset samples. This model converges to the equilibrium of the expectation over both real and generated datasets of the binary cross entropy loss of the classifier, $\min_G \max_D \left[ \mathbb{E}_{x \sim q}[\log(D(x))] + \mathbb{E}_{z \sim p}[\log(1 - D(G(z)))] \right]$. Disadvantages of GANs are that they are notoriously difficult to train (Goodfellow, 2017), and they suffer from mode collapse, meaning that they have the tendency to only generate a subset of the original dataset. This can be problematic for anomaly detection, in which we do not want some subset of the normal data to be considered as anomalous (Bergmann et al., 2019). Recent works such as Thanh-Tung et al. (2019) offer simple and attractive explanations for GAN behavior and propose substantial upgrades; however, Ravuri & Vinyals (2019) still support the point that GANs have more trouble than other generative models covering the whole distribution support.

Another generative model is the VAE (Kingma & Welling (2014)), where, similar to a GAN generator, a decoder model tries to approximate the dataset distribution with a simple latent variable prior p(z), with $z \in \mathbb{R}^l$, and conditional distributions output by the decoder p(x|z). This leads to the estimate $p(x) = \int p(x|z)p(z)dz$, which we would like to optimize using maximum likelihood estimation on the dataset. To render the learning tractable with a stochastic gradient descent (SGD) estimator with reasonable variance, we use importance sampling, introducing density functions q(z|x) output by an encoder network, and Jensen's inequality to get the variational lower bound:

$\log p(x) = \log \mathbb{E}_{z \sim q(z|x)} \frac{p(x|z)p(z)}{q(z|x)} \geq \mathbb{E}_{z \sim q(z|x)} \log p(x|z) - D_{KL}(q(z|x) \,\|\, p(z)) = -L(x)$ (1)

We will use L(x) as our loss function for training. We define the VAE reconstruction, per analogy with an autoencoder reconstruction, as the deterministic sample $f_{VAE}(x)$ that we obtain by encoding x, decoding the mean of the encoded distribution q(z|x), and taking again the mean of the decoded distribution p(x|z). 
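To make Eq. (1) and the reconstruction $f_{VAE}$ concrete, here is a minimal PyTorch-style sketch under the common assumption, ours rather than necessarily the paper's exact parameterization, of a diagonal-Gaussian encoder and a unit-variance Gaussian decoder; `encoder` and `decoder` are placeholder callables.

```python
import torch

def vae_loss(x, encoder, decoder):
    # Negative ELBO of Eq. (1), i.e. L(x) = -E_q log p(x|z) + KL(q(z|x) || p(z)).
    mu_z, logvar_z = encoder(x)                      # q(z|x) = N(mu_z, diag(var_z))
    z = mu_z + torch.randn_like(mu_z) * (0.5 * logvar_z).exp()   # reparameterization
    mu_x = decoder(z)                                # p(x|z) = N(mu_x, I), our assumption
    rec = 0.5 * ((x - mu_x) ** 2).flatten(1).sum(dim=1)          # -log p(x|z) up to const.
    kl = -0.5 * (1 + logvar_z - mu_z ** 2 - logvar_z.exp()).sum(dim=1)
    return (rec + kl).mean()

def f_vae(x, encoder, decoder):
    # Deterministic reconstruction: decode the mean of q(z|x), take the mean of p(x|z).
    mu_z, _ = encoder(x)
    return decoder(mu_z)
```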
VAEs are known to produce blurry reconstructions and generations, but Dai & Wipf (2019) show that a huge enhancement in image quality can be gained by learning the variance of the decoded distribution p(x|z). This comes at the cost of the distribution of latent variables produced by the encoder q(z) being farther away from the prior p(z), so that samples generated by sampling $z \sim p(z)$, $x \sim p(x|z)$ have poorer quality. The authors show that using a second VAE learned on samples from q(z), and sampling from it with ancestral sampling $u \sim p(u)$, $z \sim p(z|u)$, $x \sim p(x|z)$, allows the recovery of samples of GAN-like quality. The original autoencoder can be roughly considered as a VAE whose encoded and decoded distributions have infinitely small variances." }, { "heading": "2.2 ANOMALY DETECTION AND LOCALIZATION", "text": "We will consider that an anomaly is a sample with low probability under our estimation of the dataset distribution. The VAE loss, being a lower bound on the density, is a good proxy to classify samples between the anomalous and non-anomalous categories. To this effect, a threshold T can be defined on the loss function, delimiting anomalous samples with $L(x) \geq T$ and normal samples with $L(x) < T$. However, according to Matsubara et al. (2018), the regularization term $L_{KL}(x) = D_{KL}(q(z|x) \,\|\, p(z))$ has a negative influence on the computation of anomaly scores. They propose instead an unregularized score $L_r(x) = -\mathbb{E}_{z \sim q(z|x)} \log p(x|z)$, which is equivalent to the reconstruction term of a standard autoencoder, and claim better anomaly detection.

Going from anomaly detection to anomaly localization, this reconstruction term becomes crucial to most existing solutions. Indeed, the inability of the model to reconstruct a given part of an image is used as a way to segment the anomaly, using a pixel-wise threshold on the reconstruction error. In practice, this segmentation is very often given by a pixel-wise (An & Cho, 2015; Baur et al., 2018; Matsubara et al., 2018) or patch-wise comparison of the input image and some generated image, as in Bergmann et al. (2018; 2019), where the structural dissimilarity (DSSIM, Wang et al. (2004)) between the input and its VAE reconstruction is used.

Autoencoder-based methods thus provide a straightforward way of generating an image conditioned on the input image. In the original GAN framework, though, images are generated from random noise $z \sim p(z)$ and are not conditioned on an input. Schlegl et al. (2017) propose with AnoGAN to get the generated image closest to the input using gradient descent on z for an energy defined by:

$E_{AnoGAN} = \|x - G(z)\|_1 + \lambda \cdot \|f_D(x) - f_D(G(z))\|_1$ (2)

The first term ensures that the generation G(z) is close to the input x. The second term is based on a distance between features of the input and the generated images, where $f_D(x)$ is the output of an intermediate layer of the discriminator. This term ensures that the generated image stays in the vicinity of the original dataset distribution." }, { "heading": "3 PROPOSED METHOD", "text": "" }, { "heading": "3.1 ADVERSARIAL PROJECTIONS", "text": "According to Zimmerer et al. (2018), the loss gradient with respect to x gives the direction towards normal data samples, and its magnitude could indicate how abnormal a sample is. In their work on anomaly identification, they use the loss gradient as an anomaly score.

Here we propose to use the gradient of the loss to iteratively improve the observed x. We propose to link this method to the methodology of computing adversarial samples in Szegedy et al. 
(2014).

After training a VAE on non-anomalous data, we can define a threshold T on the reconstruction loss $L_r$ as in (Matsubara et al., 2018), such that a small proportion of the most improbable samples are identified as anomalies. We obtain a binary classifier defined by

$A(x) = \begin{cases} 1 & \text{if } L_r(x) \geq T \\ 0 & \text{otherwise} \end{cases}$ (3)

Our method consists in computing adversarial samples of this classifier (Szegedy et al., 2014), that is to say, starting from a sample $x_0$ with $A(x_0) = 1$, we iterate gradient descent steps over the input x, constructing samples $x_1, \dots, x_N$, to minimize the energy E(x), defined as

$E(x_t) = L_r(x_t) + \lambda \cdot \|x_t - x_0\|_1$ (4)

An iteration is done by calculating $x_{t+1}$ as

$x_{t+1} = x_t - \alpha \cdot \nabla_x E(x_t)$, (5)

where $\alpha$ is a learning rate parameter, and $\lambda$ is a parameter trading off the inclusion of $x_t$ in the normal manifold, given by $L_r(x_t)$, and the proximity between $x_t$ and the input $x_0$, assured by the regularization term $\|x_t - x_0\|_1$." }, { "heading": "3.2 REGULARIZATION TERM", "text": "We model the anomalous images that we encounter as normal images in which a region or several regions of pixels are altered but the rest of the pixels are left untouched. To recover the best segmentation of the anomalous pixels from an anomalous image $x_a$, we want to recover the closest image $x_g$ from the normal manifold. The term closest has to be understood in the sense that the smallest number of pixels are modified between $x_a$ and $x_g$. In our model, we therefore would like to use the L0 distance as the regularization distance of the energy. Since the L0 distance is not differentiable, we use the L1 distance as an approximation." }, { "heading": "3.3 OPTIMIZATION IN INPUT SPACE", "text": "While in our method the optimization is done in the input space, in the previously mentioned AnoGAN the search for the optimal reconstruction is done by iterating over z samples with the energy defined in equation 2. Following the aforementioned analogy between a GAN generator G and a VAE decoder Dec, a similar approach in the context of a VAE would be to use the energy

$\|x - Dec(z)\|_1 - \lambda \cdot \log p(z)$ (6)

where the $-\log p(z)$ term has the same role as AnoGAN's $\|f_D(x) - f_D(G(z))\|_1$ term, to ensure that Dec(z) stays within the learned manifold. We chose not to iterate over z in the latent space for two reasons. First, because as noted in Dai & Wipf (2019) and Hoffman & Johnson (2016), the prior p(z) is not always a good proxy for the real image of the distribution in the latent space, q(z). Second, because the VAE tends to ignore some details of the original image in its reconstruction, considering that these details are part of the independent pixel noise allowed by the modeling of p(x|z) as a diagonal Gaussian, which causes its infamous blurriness. An optimization in latent space would have to recreate the high frequency structure of the image, whereas iterating over the input image space, and starting the descent at the input image $x_0$, allows us to keep that structure and thus to obtain projections of higher quality." }, { "heading": "3.4 OPTIMIZING GRADIENT DESCENT", "text": "We observed that using the Adam optimizer (Kingma & Ba, 2015) is beneficial for the quality of the optimization. Moreover, to speed up the convergence and further preserve the aforementioned high frequency structure of the input, we propose to compute our iterative samples using the pixel-wise reconstruction error of the VAE. To explain the intuition behind this improvement, we will consider the inpainting task. 
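Before detailing that intuition, here is a hedged sketch of the basic projection loop of Eqs. (4)-(5); it is our own illustrative reading, not the authors' code. `reconstruction_loss` is a placeholder for $L_r$ (e.g., the reconstruction term of the `vae_loss` sketched earlier), Adam is used following the observation above (with the paper's step size $\alpha := 0.5$ as its learning rate), $\lambda := 0.05$ matches Section 4.1, and the iteration count is an arbitrary placeholder.

```python
import torch

def project(x0, reconstruction_loss, lam=0.05, alpha=0.5, n_steps=100):
    # Gradient descent on E(x) = L_r(x) + lam * ||x - x0||_1 (Eq. 4),
    # starting from the test sample x0; each opt.step() is one update of Eq. (5).
    x = x0.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=alpha)
    for _ in range(n_steps):
        energy = reconstruction_loss(x) + lam * (x - x0).abs().sum()
        opt.zero_grad()
        energy.backward()
        opt.step()
    return x.detach()
```

A stop criterion on the energy, or on a loss threshold as in Section 3.5, could replace the fixed step count.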
In this setting, as in anomaly localization, a local perturbation is added on top of a normal image. However, in the classic inpainting task, the localization of the perturbation is known beforehand, and we can use the localization mask $\Omega$ to only change the value of the anomalous pixels in the gradient descent:

$x_{t+1} = x_t - \alpha \cdot (\nabla_x E(x_t) \odot \Omega)$ (7)

where $\odot$ is the Hadamard product. For anomaly localization and blind inpainting, where this information is not available, we compute the pixel-wise reconstruction error, which gives a rough estimate of the mask. The term $\nabla_x E(x_t)$ is therefore replaced with $\nabla_x E(x_t) \odot (x_t - f_{VAE}(x_t))^2$ in equation 5:

$x_{t+1} = x_t - \alpha \cdot (\nabla_x E(x_t) \odot (x_t - f_{VAE}(x_t))^2)$ (8)

where $f_{VAE}(x)$ is the standard reconstruction of the VAE. Optimizing the energy this way, a pixel where the reconstruction error is high will update faster, whereas a pixel with good reconstruction will not change easily. This prevents the image from updating its pixels where the reconstruction is already good, even with a high learning rate. As can be seen in appendix B, this method converges to the same performance as the method of equation 5, but with fewer iterations. An illustration of our method can be found in figure 2." }, { "heading": "3.5 STOP CRITERION", "text": "A standard stop criterion based on the convergence of the energy can efficiently be used. Using the adversarial setting introduced in section 3.1, we also propose to stop the gradient descent when a certain predefined threshold on the VAE loss is reached. For example, such a threshold can be chosen to be a quantile of the empirical loss distribution computed on the training set." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we evaluate the proposed method for two different applications: anomaly segmentation and image inpainting. Both applications are interesting use cases of our method, where we seek to reconstruct partially corrupted images, correcting the anomalies while preserving the uncorrupted image regions." }, { "heading": "4.1 UNSUPERVISED ANOMALY SEGMENTATION", "text": "In order to evaluate the proposed method for the task of anomaly segmentation, we perform experiments with the recently proposed MVTec dataset (Bergmann et al., 2019). This collection of datasets consists of 15 different categories of objects and textures in the context of industrial inspection, each category containing a number of normal and anomalous samples.

We train our model on normal training samples and test it on both normal and anomalous test samples to evaluate the anomaly segmentation performance.

We perform experiments with four different baseline autoencoders: a "vanilla" variational autoencoder with decoder covariance matrix fixed to identity (Kingma & Welling, 2014), a variational autoencoder with learned decoder variance (Dai & Wipf, 2019), a "vanilla" deterministic autoencoder trained with L2 as reconstruction loss (L2AE) and a deterministic autoencoder trained with the DSSIM reconstruction loss (DSAE), as proposed by Bergmann et al. (2018). For the sake of a fair comparison, all the autoencoder models are parameterized by convolutional neural networks with the same architecture, latent space dimensionality (set to 100), learning rate (set to 0.0001) and number of epochs (set to 300). The architecture details (layers, paddings, strides) are the same as described in Bergmann et al. (2018) and Bergmann et al. (2019). Similarly to the authors in Bergmann et al. 
(2019), for the texture datasets, we first subsample the original dataset images to 512 × 512 and then crop random patches of size 128 × 128, which are used to train and test the different models. For the object datasets, we directly subsample the original dataset images to 128 × 128, unlike Bergmann et al. (2019) who work on 256 × 256 images; then we perform rotation and translation data augmentations. For all datasets we train on 10000 images.

Anomaly segmentation is then computed by reconstructing the anomalous image and comparing it with the original. We perform the comparison between the reconstructed and original images with the DSSIM metric, as it has been observed in Bergmann et al. (2018) that it provides better anomaly localization than the L2 or L1 distances. For the gradient descent, we set the step size α := 0.5 and the L1 regularization weight λ := 0.05, and the stop criterion is reached when a sample's reconstruction loss is lower than the minimum reconstruction loss over the training set.

In table 1 we show the AUROC (Area Under the Receiver Operating Characteristic) for different autoencoder methods, with different thresholds applied to the DSSIM anomaly map computed between original and reconstructed images. Note that an AUROC of 1 expresses the best possible segmentation in terms of normal and anomalous pixels. For each autoencoder variant we compare the baseline reconstruction with the proposed gradient-based reconstruction (grad.). As in Bergmann et al. (2019) we observe that an overall best model is hard to identify; however, we show that our method increases the AUC values for almost all autoencoder variants. Aggregating the results over all datasets and baselines, we report a mean improvement rate of 9.52%, with a median of 4.33%, a 25th percentile of 1.86%, and a 75th percentile of 15.86%. The histogram of the improvement rate for all datasets and baselines is provided in appendix F, as well as a short analysis.

In figure 3 we compare our anomaly segmentation with a baseline L2 autoencoder (Bergmann et al., 2019) (L2AE) for a number of image categories. For all results in figure 3, we set the same threshold of 0.2 on the anomaly detection map given by the DSSIM metric. The visual results in figure 3 highlight an overall improvement of anomaly localization by our proposed iterative reconstruction (L2AE-grad). See appendix C for additional visual results of anomaly segmentation on the remaining categories of the MVTec dataset, and on the remaining baseline models." }, { "heading": "4.2 INPAINTING", "text": "Image inpainting is a well-known image reconstruction problem which consists of reconstructing a corrupted or missing part of an image, where the region to be reconstructed is usually given by a known mask. Many different approaches for inpainting have been proposed in the literature, such as anisotropic diffusion (Bertalmio et al., 2000), patch matching (Criminisi et al., 2004), context autoencoders (Pathak et al., 2016) and conditional variational autoencoders (Ivanov et al., 2019).

If we consider that the region to be reconstructed is not known beforehand, the problem is sometimes called blind inpainting (Altinel et al., 2018), and the corrupted part can be seen as an anomaly to be corrected.

We performed experiments with image inpainting on the CelebA dataset (Liu et al., 2015), which consists of celebrity faces. 
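Since the inpainting experiments rely on the masked update of Eq. (7), and on the reconstruction-error weighting of Eq. (8) in the blind case, a minimal hedged sketch of a single update step is given below. `energy` is assumed to be a callable computing Eq. (4) from the current iterate and the original sample, and `f_vae` a reconstruction function as sketched earlier; neither name is from the authors' API.

```python
import torch

def project_step(x, x0, energy, alpha=0.5, mask=None, f_vae=None):
    # One masked gradient step of Eq. (7) / Eq. (8); x must require gradients.
    grad = torch.autograd.grad(energy(x, x0), x)[0]
    if mask is not None:                # classic inpainting, Eq. (7): mask Omega known
        grad = grad * mask
    elif f_vae is not None:             # blind setting, Eq. (8): estimated mask
        with torch.no_grad():
            grad = grad * (x - f_vae(x)) ** 2
    return (x - alpha * grad).detach().requires_grad_(True)
```

Pixels with a large reconstruction error thus receive large updates, while well-reconstructed pixels stay nearly untouched, matching the behavior described after Eq. (8).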
In figure 4 we compare the inpainting results obtained with a baseline VAE with learned variance (γ-VAE) and a ResNet architecture, as described by Dai & Wipf (2019), with the same VAE model augmented by our proposed gradient-based iterative reconstruction. Note that for the regular inpainting task, gradients are multiplied by the inpainting mask at each iteration (equation 7), while for the blind inpainting task, the mask is unknown. See appendix D for a comparison with a recent method based on variational autoencoders, proposed by Ivanov et al. (2019)." }, { "heading": "5 RELATED WORK", "text": "Baur et al. (2018) have used autoencoder reconstructions to localize anomalies in MRI scans, and have compared several variants using diverse per-pixel distances as well as perceptual metrics derived from a GAN-like architecture. Bergmann et al. (2018) use the structural similarity metric (Wang et al., 2004) to compare the original image and its reconstruction to achieve better anomaly localization, and also present the SSIM autoencoder, which is trained directly with this metric.

Zimmerer et al. (2018) use the derivative of the VAE loss function with respect to the input, called the score. The amplitude of the score is supposed to indicate how abnormal a pixel is. While we agree that the gradient of the loss is an indication of an anomaly, we think that we have to integrate this gradient over the path from the input to the normal manifold to obtain meaningful information. We compare our results to score-based results for anomaly localization in appendix A.

The work most closely related to ours is AnoGAN (Schlegl et al., 2017). We have mentioned above the differences between the two approaches, which, apart from the change in underlying architectures, boil down to the ability of our method to directly update the input image instead of searching for the optimal latent code. This enables the method to converge faster and, above all, to keep the higher-frequency structures of the input, which would have been deteriorated had they been passed through the AE bottleneck. Bergmann et al. (2019) compare standard AE reconstruction techniques to AnoGAN, and observe that AnoGAN's performance on anomaly localization tasks is poorer than that of AEs, due to the mode collapse tendency of GAN architectures. Interestingly, updates of AnoGAN such as fast AnoGAN (Schlegl et al., 2019) or AnoVAEGAN (Baur et al., 2018) replaced the gradient descent search for the optimal z with a learned encoder model, yielding an approach very similar to the standard VAE reconstruction-based approaches, but with a reconstruction loss learned by a discriminator, which is still prone to mode collapse (Thanh-Tung et al., 2019)." }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed a novel method for unsupervised anomaly localization, using gradient descent on an energy defined by an autoencoder reconstruction loss. Starting from a sample under test, we iteratively update this sample to reduce its autoencoder reconstruction error. This method offers a way to incorporate human priors into what constitutes the optimal projection of an out-of-distribution sample onto the normal data manifold. In particular, we use the pixel-wise reconstruction error to modulate the gradient descent, which gives impressive anomaly localization results in only a few iterations. 
Using gradient descent in the input data space, starting from the input sample, enables us to overcome the autoencoder's tendency to provide blurry reconstructions and to keep normal high frequency structures. This significantly reduces the number of pixels that could be wrongly classified as defects when the autoencoder fails to reconstruct high frequencies. We showed that this method, which can easily be added to any previously trained autoencoder architecture, gives state-of-the-art results on a variety of unsupervised anomaly localization datasets, as well as qualitative reconstructions on an inpainting task. Future work can focus on replacing the L1-based regularization term with a Bayesian prior modeling common types of anomalies, and on further improving the speed of the gradient descent." }, { "heading": "A COMPARISON WITH ZIMMERER ET AL. (2019)", "text": "Zimmerer et al. (2019) proposed to perform anomaly localization using different scores derived from the gradient of the VAE loss. In particular, it has been shown that the product of the VAE reconstruction error with the gradient of the KL divergence was very informative for medical images. In table 2 we compare the pixel-wise anomaly detection AUROC of these different scores with our method. For all experiments, we use the same "vanilla" VAE as described in section 4.1.

It can be seen that other VAE-based methods using a single evaluation of the gradient are consistently outperformed by our method." }, { "heading": "B CONVERGENCE SPEED", "text": "In figure 5 we compare the number of iterations needed to reach convergence with our two proposals for gradient descent: the standard update as in equation 5 and the tuned update using a gradient mask computed with the VAE reconstruction error, as in equation 8. The model is a VAE with learned decoder variance (Dai & Wipf, 2019), trained on the Grid dataset (Bergmann et al., 2019). We compute the mean pixel-wise anomaly detection AUROC after each iteration on the test set.

We can see that the tuned method converges to the same performance as the standard method, with far fewer iterations." }, { "heading": "C ADDITIONAL ANOMALY SEGMENTATION RESULTS", "text": "" }, { "heading": "D INPAINTING COMPARISON", "text": "" }, { "heading": "E ILLUSTRATION OF THE OPTIMIZATION PROCESS", "text": "Figure 9 illustrates our method's principle. We start with a defective input $x_0$ whose reconstruction $\hat{x}_0$ does not necessarily lie on the normal data manifold. As the optimization process carries on, the optimized sample $x_0$ and its reconstruction look more similar and get closer to the manifold. The regularization term of the energy function makes sure that the optimized sample stays close to the original sample." }, { "heading": "F DISTRIBUTION OF THE IMPROVEMENT RATE ON MVTEC AD", "text": "Figure 10 shows the distribution of the AUC improvement rate over all presented baselines and all datasets in MVTec AD using our gradient-based projection method:

improvement rate = (AUC_grad − AUC_base) / AUC_base

• 8.3% of data points are under the 0 value delimiting an increase or decrease in AUC due to our method, and 91.7% of data points are over this value. Our method increases the AUC in a vast majority of cases.

• The median is at 4.33%, the 25th percentile at 1.86%, and the 75th percentile at 15.86%." } ]
2020
ITERATIVE ENERGY-BASED PROJECTION ON A NORMAL DATA MANIFOLD FOR ANOMALY LOCALIZATION
SP:cf0db5624fc03cd71e331202c16808174b4a9ae7
[ "The paper proposes a type of recurrent neural network module called Long History Short-Term Memory (LH-STM) for longer-term video generation. This module can be used to replace ConvLSTMs in previously published video prediction models. It expands ConvLSTMs by adding a \"previous history\" term to the ConvLSTM equations that compute the IFO gates and the candidate new state. This history term corresponds to a linear combination of previous hidden states selected through a soft-attention mechanism. As such, it is not clear if there are significant differences between LH-STMs and previously proposed LSTMs with attention on previous hidden states. The authors propose recurrent units that include one or two History Selection (soft-attention) steps, called single LH-STM and double LH-STM respectively. The exact formulation of the double LH-STM is not clear from the paper. The authors then propose to use models with LH-STM units for longer term video generation. They claim that LH-STM can better reduce error propagation and better model the complex dynamics of videos. To support the claims, they conduct empirical experiments where they show that the proposed model outperforms previous video prediction models on KTH (up to 80 frames) and the BAIR Push dataset (up to 25 frames).", "This paper proposes a new LSTM architecture called LH-STM (and Double LH-STM). The main idea deals with having a history selection mechanism to directly extract what information from the past. The authors also propose to decompose the history and update in LH-STM into two networks called Double LH-STM. In experiments, the authors evaluate and compare their two architectures with previously proposed models. They show that their architecture outperforms previous in the PSNR, SSIM and VIF metrics." ]
While video prediction approaches have advanced considerably in recent years, learning to predict the long-term future is challenging: an ambiguous future and error propagation over time yield blurry predictions. To address this challenge, existing algorithms rely on extra supervision (e.g., action or object pose), motion flow learning, or adversarial training. In this paper, we propose a new recurrent unit, Long History Short-Term Memory (LH-STM). LH-STM incorporates long history states into a recurrent unit to learn longer-range dependencies. To capture spatiotemporal dynamics in videos, we combined LH-STM with the Context-aware Video Prediction model (ContextVP). Our experiments on the KTH human actions and BAIR robot pushing datasets demonstrate that our approach produces sharper predictions not only in the near future but also farther into the future compared to state-of-the-art methods.
[]
[ { "authors": [ "Mohammad Babaeizadeh", "Chelsea Finn", "Dumitru Erhan", "Roy H Campbell", "Sergey Levine" ], "title": "Stochastic variational video prediction", "venue": "arXiv preprint arXiv:1710.11252,", "year": 2017 }, { "authors": [ "Wonmin Byeon", "Qin Wang", "Rupesh Kumar Srivastava", "Petros Koumoutsakos" ], "title": "Contextvp: Fully context-aware video prediction", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Víctor Campos", "Brendan Jou", "Xavier Giró-i Nieto", "Jordi Torres", "Shih-Fu Chang" ], "title": "Skip rnn: Learning to skip state updates in recurrent neural networks", "venue": "arXiv preprint arXiv:1708.06834,", "year": 2017 }, { "authors": [ "Jianpeng Cheng", "Li Dong", "Mirella Lapata" ], "title": "Long short-term memory-networks for machine reading", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Junyoung Chung", "Sungjin Ahn", "Yoshua Bengio" ], "title": "Hierarchical multiscale recurrent neural networks", "venue": "arXiv preprint arXiv:1609.01704,", "year": 2016 }, { "authors": [ "Emily Denton", "Robert Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Emily L Denton" ], "title": "Unsupervised learning of disentangled representations from video", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Frederik Ebert", "Chelsea Finn", "Alex X Lee", "Sergey Levine" ], "title": "Self-supervised visual planning with temporal skip connections", "venue": "arXiv preprint arXiv:1710.05268,", "year": 2017 }, { "authors": [ "Salah El Hihi", "Yoshua Bengio" ], "title": "Hierarchical recurrent neural networks for long-term dependencies", "venue": "In Advances in neural information processing systems,", "year": 1996 }, { "authors": [ "Chelsea Finn", "Ian Goodfellow", "Sergey Levine" ], "title": "Unsupervised learning for physical interaction through video prediction", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Aistats,", "year": 2010 }, { "authors": [ "Tao Gui", "Qi Zhang", "Lujun Zhao", "Yaosong Lin", "Minlong Peng", "Jingjing Gong", "Xuanjing Huang" ], "title": "Long short-term memory with dynamic skip connections", "venue": "arXiv preprint arXiv:1811.03873,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Yoshua Bengio", "Paolo Frasconi", "Jürgen Schmidhuber" ], "title": "Gradient flow in recurrent nets: the difficulty of learning long-term dependencies", "venue": null, "year": 2001 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Jan Koutnik", "Klaus Greff", "Faustino Gomez", "Juergen Schmidhuber" ], "title": "A clockwork rnn", "venue": "arXiv preprint arXiv:1402.3511,", "year": 2014 }, { "authors": [ "Ivan Laptev", "Barbara Caputo" ], "title": "Recognizing human actions: a local svm approach", "venue": "In null,", "year": 2004 }, { "authors": [ "Alex X Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video 
prediction", "venue": "arXiv preprint arXiv:1804.01523,", "year": 2018 }, { "authors": [ "William Lotter", "Gabriel Kreiman", "David Cox" ], "title": "Deep predictive coding networks for video prediction and unsupervised learning", "venue": "arXiv preprint arXiv:1605.08104,", "year": 2016 }, { "authors": [ "Michael Mathieu", "Camille Couprie", "Yann LeCun" ], "title": "Deep multi-scale video prediction beyond mean square error", "venue": "arXiv preprint arXiv:1511.05440,", "year": 2015 }, { "authors": [ "Gábor Melis", "Chris Dyer", "Phil Blunsom" ], "title": "On the state of the art of evaluation in neural language models", "venue": "arXiv preprint arXiv:1707.05589,", "year": 2017 }, { "authors": [ "Stephen Merity", "Nitish Shirish Keskar", "Richard Socher" ], "title": "Regularizing and optimizing lstm language models", "venue": "arXiv preprint arXiv:1708.02182,", "year": 2017 }, { "authors": [ "Asier Mujika", "Florian Meier", "Angelika Steger" ], "title": "Fast-slow recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Daniel Neil", "Michael Pfeiffer", "Shih-Chii Liu" ], "title": "Phased lstm: Accelerating recurrent network training for long or event-based sequences", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Junhyuk Oh", "Xiaoxiao Guo", "Honglak Lee", "Richard L Lewis", "Satinder Singh" ], "title": "Action-conditional video prediction using deep networks in atari games", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Marc Oliu", "Javier Selva", "Sergio Escalera" ], "title": "Folded recurrent neural networks for future video prediction", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Razvan Pascanu", "Caglar Gulcehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "How to construct deep recurrent neural networks", "venue": "arXiv preprint arXiv:1312.6026,", "year": 2013 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Learning complex, extended sequences using the principle of history compression", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Hamid R Sheikh", "Alan C Bovik" ], "title": "A visual information fidelity approach to video quality assessment", "venue": "In The First International Workshop on Video Processing and Quality Metrics for Consumer Electronics,", "year": 2005 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Rohollah Soltani", "Hui Jiang" ], "title": "Higher order recurrent neural networks", "venue": "arXiv preprint arXiv:1605.00064,", "year": 2016 }, { "authors": [ "Marijn F Stollenga", "Wonmin Byeon", "Marcus Liwicki", "Juergen Schmidhuber" ], "title": "Parallel multidimensional lstm, with application to fast biomedical volumetric image segmentation", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ruben Villegas", "Jimei Yang", "Seunghoon Hong", "Xunyu Lin", "Honglak Lee" ], "title": "Decomposing motion and content for natural 
video sequence prediction", "venue": "arXiv preprint arXiv:1706.08033,", "year": 2017 }, { "authors": [ "Ruben Villegas", "Jimei Yang", "Yuliang Zou", "Sungryull Sohn", "Xunyu Lin", "Honglak Lee" ], "title": "Learning to generate long-term future via hierarchical prediction", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Carl Vondrick", "Antonio Torralba" ], "title": "Generating the future with adversarial transformers", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Carl Vondrick", "Hamed Pirsiavash", "Antonio Torralba" ], "title": "Generating videos with scene dynamics", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yunbo Wang", "Mingsheng Long", "Jianmin Wang", "Zhifeng Gao", "S Yu Philip" ], "title": "Predrnn: Recurrent neural networks for predictive learning using spatiotemporal lstms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yunbo Wang", "Zhifeng Gao", "Mingsheng Long", "Jianmin Wang", "Philip S Yu" ], "title": "Predrnn++: Towards a resolution of the deep-in-time dilemma in spatiotemporal predictive learning", "venue": "arXiv preprint arXiv:1804.06300,", "year": 2018 }, { "authors": [ "Yunbo Wang", "Lu Jiang", "Ming-Hsuan Yang", "Li-Jia Li", "Mingsheng Long", "Li Fei-Fei" ], "title": "Eidetic 3d lstm: A model for video prediction and beyond. 2018b", "venue": null, "year": 2018 }, { "authors": [ "Zhou Wang", "Alan C Bovik", "Hamid R Sheikh", "Eero P Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE transactions on image processing,", "year": 2004 }, { "authors": [ "Dirk Weissenborn", "Oscar Täckström", "Jakob Uszkoreit" ], "title": "Scaling autoregressive video models", "venue": "arXiv preprint arXiv:1906.02634,", "year": 2019 }, { "authors": [ "Nevan Wichers", "Ruben Villegas", "Dumitru Erhan", "Honglak Lee" ], "title": "Hierarchical long-term video prediction without supervision", "venue": "arXiv preprint arXiv:1806.04768,", "year": 2018 }, { "authors": [ "SHI Xingjian", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-Kin Wong", "Wang-chun Woo" ], "title": "Convolutional lstm network: A machine learning approach for precipitation nowcasting", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Julian Georg Zilly", "Rupesh Kumar Srivastava", "Jan Koutník", "Jürgen Schmidhuber" ], "title": "Recurrent highway networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 } ]
[ { "heading": null, "text": "While video prediction approaches have advanced considerably in recent years, learning to predict long-term future is challenging — ambiguous future or error propagation over time yield blurry predictions. To address this challenge, existing algorithms rely on extra supervision (e.g., action or object pose), motion flow learning, or adversarial training. In this paper, we propose a new recurrent unit, Long History Short-Term Memory (LH-STM). LH-STM incorporates long history states into a recurrent unit to learn longer range dependencies. To capture spatiotemporal dynamics in videos, we combined LH-STM with the Context-aware Video Prediction model (ContextVP). Our experiments on the KTH human actions and BAIR robot pushing datasets demonstrate that our approach produces not only sharper near-future predictions, but also farther into the future compared to the state-of-the-art methods." }, { "heading": "1 INTRODUCTION", "text": "Learning the dynamics of an environment and predicting consequences in the future has become an important research problem. A common task is to train a model that accurately predicts pixel-level future frames conditioned on past frames. It can be utilized for intelligent agents to guide them to interact with the world, or for other video analysis tasks such as activity recognition. An important component of designing such models is how to effectively learn good spatio-temporal representations from video frames. The Convolutional Long Short-Term Memory (ConvLSTM) network (Xingjian et al., 2015) has been a popular model architecture choice for video prediction. However, recent stateof-the-art approaches produce high-quality predictions only for one or less then ten frames (Lotter et al., 2016; Villegas et al., 2017a; Byeon et al., 2018). Learning to predict long-term future video frames remains challenging due to 1) the presence of complex dynamics in high-dimensional video data, 2) prediction error propagation over time, and 3) inherent uncertainty of the future.\nMany recent works (Denton & Fergus, 2018; Babaeizadeh et al., 2017; Lee et al., 2018) focus on the third issue by introducing stochastic models; this issue is a crucial challenge for long-term prediction. However, the architectures currently in use are not sufficiently powerful and efficient for long-term prediction, and this is also an important but unsolved problem. The model needs to extract important information from spatio-temporal data and retain this information longer into the future efficiently. Otherwise, its uncertainty about the future will increase even if the future is completely predictable given the past. Therefore, in this paper, we attempt to address the issue of learning complex dynamics of videos and minimizing long-term prediction error by fully observing the history.\nWe propose a novel modification of the ConvLSTM structure, Long History Short-Term Memory (LH-STM). LH-STM learns to interconnect history states and the current input by a History SoftSelection Unit (HistSSel) and double memory modules. The weighted history states computed by our HistSSel units are combined with the history memory and then used to update the current states by the update memory. The proposed method brings the power of higher-order RNNs (Soltani & Jiang, 2016) to ConvLSTMs, which have been limited to simple recurrent mechanisms so far. The HistSSel unit acts as a short-cut to the history, so the gradient flow in the LSTM is improved. 
More powerful RNNs are likely to be necessary to solve the hard and unsolved problem of long-term video prediction, which is extremely challenging for current architectures. Moreover, by disentangling the history and update memories, our model can fully utilize the long history states.

This structure can better model long-term dependencies in sequential data. In this paper, the proposed modification is integrated into a ConvLSTM-based architecture, the Context-aware Video Prediction (ContextVP) model, to solve the long-term video prediction problem. The proposed models can fully leverage long-range spatio-temporal contexts in real-world videos. Our experiments on the KTH human actions and the BAIR robot pushing datasets show that our model produces sharp and realistic predictions for more frames into the future compared to recent state-of-the-art long-term video prediction methods." }, { "heading": "2 RELATED WORK", "text": "Learning Long-Term Dependencies with Recurrent Neural Networks (RNN): While Long Short-Term Memory (LSTM) has been successful for sequence prediction, many recent approaches aim to capture longer-term dependencies in sequential data. Several works have proposed to allow dynamic recurrent state updates or to learn more complex transition functions. Chung et al. (2016) introduced the hierarchical multiscale RNN that captures a hierarchical representation of a sequence by encoding multiple time scales of temporal dependencies. Koutnik et al. (2014) modified the standard RNN to a Clockwork RNN that partitions hidden units and processes them at different clock speeds. Neil et al. (2016) introduced a new time gate that controls update intervals based on periodic patterns. Campos et al. (2017) proposed an explicit skipping module for state updates. Zilly et al. (2017) increased the recurrent transition depth with highway layers. Fast-Slow Recurrent Neural Networks (Mujika et al., 2017) incorporate ideas from both multiscale (Schmidhuber, 1992; El Hihi & Bengio, 1996; Chung et al., 2016) and deep transition (Pascanu et al., 2013; Zilly et al., 2017) RNNs. The advantages of the above approaches are efficient information propagation through time, better long memory traces, and generalization to unseen data.

Alternative solutions include the use of history states, an attention model, or skip connections. Soltani & Jiang (2016) investigated a higher-order RNN to aggregate more history information and showed that it is beneficial for long-range sequence modeling. Cheng et al. (2016) deployed an attention mechanism in LSTM to induce relations between input and history states. Gui et al. (2018) incorporated dynamic skip connections and reinforcement learning into an LSTM to model long-term dependencies. These approaches use the history states in a single LSTM by directly adding more recurrent connections or adding an attention module in the memory cell. These models are used for one-dimensional sequence modeling, whereas our proposed approach separates the history and update memories, which learn to encode the relevant long-range history states. Furthermore, our approach is more suitable for high-dimensional (e.g., video) prediction tasks.

Video Prediction: The main issue in long-term pixel-level video prediction is how to capture long-term dynamics and handle uncertainty of the future while maintaining sharpness and realism. Oh et al. (2015) introduced action-conditioned video prediction using a Convolutional Neural Network (CNN) architecture. Villegas et al. (2017b) and Wichers et al.
(2018) focused on a hierarchical model to predict long-term videos. Their model estimates high-level structure before generating pixel-level predictions. However, the approach by Villegas et al. (2017b) requires object pose information as ground truth during training. Finn et al. (2016) used ConvLSTM to explicitly model pixel motions. To generate high-quality predictions, many approaches train with an adversarial loss (Mathieu et al., 2015; Wichers et al., 2018; Vondrick et al., 2016; Vondrick & Torralba, 2017; Denton et al., 2017; Lee et al., 2018). Weissenborn et al. (2019) introduced local self-attention directly on videos for large-scale video processing. Another active line of investigation is to train stochastic prediction models using VAEs (Denton & Fergus, 2018; Babaeizadeh et al., 2017; Lee et al., 2018). These models predict plausible futures by sampling latent variables and produce long-range future predictions.

The Spatio-temporal LSTM (Wang et al., 2017; 2018a) was introduced to better represent the dynamics of videos. This model is able to learn spatial and temporal representations simultaneously. Byeon et al. (2018) introduced a Multi-Dimensional LSTM-based approach (Stollenga et al., 2015) for video prediction. It contains directional ConvLSTM-like units that efficiently aggregate the entire spatio-temporal contextual information. Wang et al. (2018b) recently proposed a memory recall function with 3D ConvLSTM. This work is the most related to our approach. It uses a set of cell states with an attention mechanism to capture the long-term frame interaction, similar to the work of Cheng et al. (2016). With 3D-convolutional operations, the model is able to capture short-term and long-term information flow. In contrast to this work, the attention mechanism in our model is used for a set of hidden states. We also disentangle the history and update memory cells to better memorize and restore the relevant information. By integrating a double memory LH-STM into the context-aware video prediction (ContextVP) model (Byeon et al., 2018), our networks can capture the entire spatio-temporal context for a long-range video sequence." }, { "heading": "3 METHOD", "text": "In this section, we first describe the standard ConvLSTM architecture and then introduce the LH-STM. Finally, we explain the ConvLSTM-based network architectures for multi-frame video prediction using the Context-aware Video Prediction model (ContextVP) (Byeon et al., 2018)." }, { "heading": "3.1 CONVOLUTIONAL LSTM", "text": "Let Xn1 = {X1, ..., Xn} be an input sequence of length n. Xk ∈ Rh×w×c is the k-th frame, where k ∈ {1, ..., n}, h is the height, w the width, and c the number of channels. For the input frame Xk, a ConvLSTM unit computes the current cell and hidden states (Ck, Hk) given the cell and hidden states from the previous frame, (Ck−1, Hk−1):

Ck, Hk = ConvLSTM(Xk, Hk-1, Ck-1), (1)

by computing the input, forget, output gates ik, fk, ok, and the transformed cell state Ĉk:

ik = σ(Wi ∗ Xk + Mi ∗ Hk-1 + bi), fk = σ(Wf ∗ Xk + Mf ∗ Hk-1 + bf), ok = σ(Wo ∗ Xk + Mo ∗ Hk-1 + bo), Ĉk = tanh(Wĉ ∗ Xk + Mĉ ∗ Hk-1 + bĉ), Ck = fk ⊙ Ck-1 + ik ⊙ Ĉk, Hk = ok ⊙ tanh(Ck),

(2)

where σ is the sigmoid function, W and M are 2D convolutional kernels for input-to-state and state-to-state transitions, (∗) is the convolution operation, and (⊙) is element-wise multiplication. The size of the weight matrices depends on the size of the convolutional kernel and the number of hidden units."
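As a reference for the update above, here is a minimal PyTorch sketch of a ConvLSTM cell implementing Equations (1)-(2). Fusing the four gate convolutions into a single convolution is a common implementation choice, not something specified by the paper, and all names are illustrative.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        pad = kernel // 2
        # W * X_k and M * H_{k-1} for the i, f, o gates and candidate C_hat, fused
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=pad)

    def forward(self, x, h_prev, c_prev):
        z = self.conv(torch.cat([x, h_prev], dim=1))
        i, f, o, c_hat = torch.chunk(z, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c_prev + i * torch.tanh(c_hat)  # C_k = f ⊙ C_{k-1} + i ⊙ Ĉ_k
        h = o * torch.tanh(c)                   # H_k = o ⊙ tanh(C_k)
        return h, c
```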
}, { "heading": "3.2 LONG HISTORY SHORT-TERM MEMORY (LH-STM)", "text": "LH-STM is an extension of the standard ConvLSTM model by integrating a set of history states into the LSTM unit. The history states include the spatio-temporal context of each frame in addition to the pixel level information in the frame itself. Figure 1 illustrates the differences between a standard RNN, Higher-Order RNN (Soltani & Jiang, 2016), and our proposed model (Double and Single LH-STM).\nHistory Soft-Selection Unit (HistSSel): The HistSSel unit computes the relationship between the recent history and the earlier ones using dot-product similar to (Vaswani et al., 2017)1. This mechanism can be formulated as SoftSel(Q,K, V ) = softmax(WQQ ·WQK) ·WQV . It consists of queries (Q), keys (K) and values (V). It computes the dot products of the queries and the keys; and then applies the a softmax function. Finally, the values (V) are weighted by the outputs of softmax function. The queries, keys, and values can be optionally transformed by the WQ, WK , and WV matrices.\nUsing this mechanism, HistSSel computes the relationship between the last hidden state Hk-1 and the earlier hidden states Hk-m:k-2 at time step k (See Figure 2). Hk-m:k-2 is the set of previous hidden states, (Hk-m, Hk-m-1, · · ·Hk-3, Hk-2). We will show in Section 4.2 the benefit of using history states versus using the past input frames directly. The history soft-selection mechanism can be formulated as follows:\nHistSSel(Hk-1, Hk-m:k-2) = n∑\ni=2\nsoftmaxi(H̃ Q k-1 · H̃ K k-i) · H̃Vk-i,\nH̃ji =W j i Hi + b j i , j ∈ {Q,K, V }. 2\n(3)\nSingle LH-STM: A simple way to employ HistSSel in ConvLSTM (Equation 1) is to add HistSSel(Hk-1, Hk-m:k-2) in addition to the input and the previous states (Xk, Hk-1, Ck-1) in the Equation 6 Figure 1c shows a diagram of the computation. This direct extension is named Single LH-STM: Hk = SingleConvLSTM(Xk, Hk-1, Ck-1,HistSSel(Hk-1, Hk-m:k-2)).\nik = σ(Wi ∗Xk +Mi ∗Hk-1 + HistSSel(Hk-1, Hk-m:k-2) + bi), fk = σ(Wf ∗Xk +Mf ∗Hk-1 + HistSSel(Hk-1, Hk-m:k-2) + bf ), ok = σ(Wo ∗Xk +Mo ∗Hk-1 + HistSSel(Hk-1, Hk-m:k-2) + bo) Ĉk = tanh(Wĉ ∗Xk +Mĉ ∗Hk-1 + HistSSel(Hk-1, Hk-m:k-2) + bĉ), Ck = ik Ĉk + fk Ck-1, Hk = ok tanh(Ck).\n(4)\nDouble LH-STM: To effectively learn dynamics from the history states, we propose Double LH-STM. It contains two ConvLSTM blocks, History LSTM (H-LSTM) and Update LSTM (U-LSTM). The goal of the Double LH-STM is to explicitly separate the long-term history memory and the update memory. By disentangling these, the model can better encode complex long-range history and keep track of the their dependencies. Figure 1d illustrates a diagram of Double LH-STM.\n1The purpose of this unit is to compute the importance of each history state. While we used a selfattention-like unit in this paper, this can be achieved with other common layers as well, e.g., fully connected or convolutional layers.\n2bji can be omitted.\nThe H-LSTM block explicitly learns the complex transition function from the (possibly entire) set of past hidden states, Hk-m:k-1, 1 < m < k. If m = k, H-LSTM incorporates the entire history up to the time step k − 1. 
The U-LSTM block updates the states Hk and Ck for the time step k, given the input Xk, previous cell state Ck−1, and the output of the H-LSTM, H′k−1.

The History-LSTM (H-LSTM) and Update-LSTM (U-LSTM) can be formulated as:

H-LSTM

i′k-1 = σ(M′i ∗ Hk-1 + HistSSel(Hk-1, Hk-m:k-2) + b′i), f′k-1 = σ(M′f ∗ Hk-1 + HistSSel(Hk-1, Hk-m:k-2) + b′f), o′k-1 = σ(M′o ∗ Hk-1 + HistSSel(Hk-1, Hk-m:k-2) + b′o), Ĉ′k-1 = tanh(M′ĉ ∗ Hk-1 + HistSSel(Hk-1, Hk-m:k-2) + b′ĉ), C′k-1 = f′k-1 ⊙ C′k-2 + i′k-1 ⊙ Ĉ′k-1, H′k-1 = o′k-1 ⊙ tanh(C′k-1).

(5)

U-LSTM

ik = σ(Wi ∗ Xk + Mi ∗ H′k-1 + bi), fk = σ(Wf ∗ Xk + Mf ∗ H′k-1 + bf), ok = σ(Wo ∗ Xk + Mo ∗ H′k-1 + bo), Ĉk = tanh(Wĉ ∗ Xk + Mĉ ∗ H′k-1 + bĉ), Ck = fk ⊙ Ck-1 + ik ⊙ Ĉk, Hk = ok ⊙ tanh(Ck).

(6)" }, { "heading": "3.3 IMPLEMENTATION DETAILS", "text": "Model Architectures: ConvLSTM (Xingjian et al., 2015; Stollenga et al., 2015) is a popular building block for spatio-temporal sequence forecasting problems. In this paper, we focused on the recent state-of-the-art ConvLSTM-based architecture proposed by Byeon et al. (2018): the context-aware video prediction model (ContextVP-4). It can capture sufficient spatio-temporal context for video prediction and achieves state-of-the-art results on the KTH human actions and BAIR robot pushing datasets. In this paper, the ConvLSTM block is replaced with LH-STM. Additionally, we exploit two types of skip connections to avoid the problem of vanishing gradients (Hochreiter et al., 2001) and allow long-term prediction: (a) between the previous and the current recurrent layers (blue dotted line in Figure 9) and (b) across layers (green solid line in Figure 9). See Appendix B for the effectiveness of both skip connections.

ContextVP-4 (Byeon et al., 2018) consists of 4 layers of five directional Parallel Multi-Dimensional LSTM (PMD) units (Stollenga et al., 2015) along the h+, h−, w+, w−, and t− recurrence directions. A PMD unit along the time direction (t−) is mathematically equivalent to a standard ConvLSTM (Equation 6). By using LSTM connectivities across the temporal and spatial dimensions, each processing layer covers the entire context in a video. Following Byeon et al. (2018), we also included weighted blending, directional weight sharing (DWS), and two skip connections between the layers 1 - 3 and 2 - 4 (see Figure 9). See Appendix B for details of the network architectures.

Loss: The loss function to train the networks is the combination of L1-norm, L2-norm, and perceptual loss (Johnson et al., 2016; Zhang et al., 2018):

L(Y, X̂) = λ1 L1(Y, X̂) + λ2 L2(Y, X̂) + λpl Lpl(Y, X̂), (7)

where Y and X̂ are the target and the predicted frames, respectively. λ is a weight for each function. The Lp-norm is defined as Lp(Y, X̂) = ||Y − X̂||p, with p = 1 for the L1-norm and p = 2 for the L2-norm. The perceptual loss computes the cosine distance between the feature maps extracted from a VGG-16 network pre-trained on the ImageNet dataset (Simonyan & Zisserman, 2014) as Lpl(Y, X̂) = 1 − (1/L) ∑_l (1/(hl × wl)) ∑_{hl, wl} (φ(Y)l · φ(X̂)l), where φ(Y)l and φ(X̂)l are the feature maps of the target and the predicted frames Y and X̂, respectively, at layer l, and L is the number of layers. The size of the feature map at layer l is hl × wl." }, { "heading": "4 EXPERIMENTS", "text": "We evaluated our LH-STM model on the KTH human actions (Laptev et al., 2004) and BAIR action-free robot pushing (Ebert et al., 2017) datasets. Videos in the KTH human actions dataset consist of six human actions including walking, jogging, running, boxing, hand waving, and hand clapping.
The BAIR action-free robot pushing dataset includes randomly moving robotic arms, which push objects on a table. The videos have a resolution of 64× 64 pixels. We show quantitative comparisons by computing frame-wise Structural Similarity (SSIM) (Wang et al., 2004), Peak Signal-to-Noise Ratio (PSNR), Visual Information Fidelity (VIF) (Sheikh & Bovik, 2005), and LPIPS (Zhang et al., 2018) between the ground truth and generated video sequences." }, { "heading": "4.1 KTH HUMAN ACTIONS DATASET", "text": "We follow the experimental setup of Lee et al. (2018). We evaluate our methods on 128× 128 pixels for 40 frame prediction (Table 2). We train all models to predict the next 10 frames, given 10 prior ones. For testing, all models recursively predict 40 frames.

Our Double LH-STM outperforms all competing methods, including the base model (ContextVP), on all metrics. Our Double LH-STM model consistently outperforms other methods for the 40 predicted frames. The improvement is larger for longer-term frame prediction into the future (40 frames). Qualitative results can be found in Figure 7 and Appendix E (Figure 14 and Figure 15). These results show that the motion and human shape predicted by Double LH-STM are the closest to the ground truth.

Our models are compared with FRNN (Oliu et al., 2018), PredRNN (Wang et al., 2017), PredRNN++ (Wang et al., 2018a), E3D-LSTM (Wang et al., 2018b), and the SAVP-VAE and SAVP-Deterministic models (Lee et al., 2018)3. Our model outperforms all other methods." }, { "heading": "4.2 STATE HISTORY VS INPUT HISTORY", "text": "The proposed LH-STM models use past hidden states. Alternatively, input frames can be directly applied to LH-STM as illustrated in Figure 3. We compared the performance of using input and state history with the same LH-STM models and the same model size in Table 1 on the KTH human actions dataset. Figure 7 shows a qualitative comparison of using state history (Double LH-STM) and input history (Double, input history). The results show the importance of using history states with LH-STM and that the state history is more suitable for videos." }, { "heading": "4.3 LONGER PREDICTION", "text": "To test the performance of longer-term prediction (more than 40 frames), we additionally trained on the KTH human actions dataset with a resolution of 64× 64 pixels4. Figure 4 and Figure 5 compare the performance of 80 frame prediction with Double LH-STM and two state-of-the-art models (SAVP-VAE and SAVP-Deterministic (Lee et al., 2018)) on all action classes and three action classes only (hand waving, hand clapping, and boxing), respectively. We compare only these three action classes with all action classes to show that the high performance at longer frames is not due to background copying. The human in other action class videos (running, jogging, and walking) may disappear after a certain number of frames. Additionally, Figure 13 shows the performance comparisons for each action class. The experimental setup is the same as in the 128 × 128 resolution experiments (Section 4.1). From these comparisons, we can observe that Double LH-STM results are generally better than

3 We note that Lee et al. (2018) reported the performance of VAE+GAN, VAE only, GAN only, and of their deterministic models. We select their best (VAE only) and the most comparable (deterministic) models to compare against. To evaluate the VAE model, we take 100 random input samples for each video following Lee et al. (2018).
The reported numbers in this paper are for their method's "median" output among the 100 randomly generated ones, compared to the ground truth. Please see Appendix C for the discussion of this evaluation setup.

4 Due to lack of GPU memory, we could not use the original resolution for 80 frame prediction.

SAVP-Deterministic and -VAE on all metrics. These also show that our model is not just copying the background when predicting more than 40 frames.

[Plots: per-frame PSNR, SSIM, and LPIPS over 80 time steps. Averaged scores: PSNR 27.53 (Double LH-STM, ours), 25.65 (SAVP-Det), 25.0 (SAVP-VAE); SSIM 0.775, 0.717, 0.696; LPIPS 0.799, 0.802, 0.78.]

Figure 4: Per-frame comparisons on the KTH human actions dataset (all actions, 64× 64, 80 frame prediction). (·): averaged score

[Plots: per-frame PSNR, SSIM, and LPIPS over 80 time steps. Averaged scores: PSNR 25.01 (Double LH-STM, ours), 23.06 (SAVP-Det), 22.68 (SAVP-VAE); SSIM 0.748, 0.681, 0.667; LPIPS 0.813, 0.8, 0.795.]

Figure 5: Per-frame comparisons on the KTH human actions dataset (‘boxing’, ‘waving’, and ‘clapping’ actions, 64× 64, 80 frame prediction). (·): averaged score" }, { "heading": "4.4 BAIR ROBOT PUSHING DATASET", "text": "Our experimental setup follows that of Lee et al. (2018). All models are conditioned on five frames to predict the next ten frames5. The models are tested to predict 25 frames. We evaluated our LH-STM models (Single and Double) and compared them with the base model (ContextVP) and the SAVP-VAE and SAVP-Deterministic models (Lee et al., 2018) in Table 3. Our Double LH-STM achieved better PSNR and SSIM scores than the SAVP-Deterministic and SAVP-VAE models. It can be seen from the per-frame prediction results (Figure 6) that the performance of LH-STM is higher over time, especially on the PSNR and SSIM metrics. On the LPIPS metric, the performance of Double LH-STM is comparable to SAVP-Deterministic and better than SAVP-VAE. Qualitative results can be found in Appendix E (Figure 17 and Figure 18). This dataset contains random robot motion, which makes long-term prediction extremely hard with deterministic models. However, the overall motion prediction with Double LH-STM and SAVP-Deterministic is still reasonable compared to SAVP-VAE.

5 Lee et al. (2018) used two input frames. We increased the number of input frames because motion is barely visible in two frames and to provide enough history to the model. We also re-trained the models of Lee et al. (2018) with five input frames." }, { "heading": "5 CONCLUSION", "text": "In this paper, we have introduced a new RNN structure, LH-STM, which incorporates long history states into the recurrent unit to learn longer-range dependencies. This module is combined with the ContextVP architecture, allowing context-aware long-term video prediction.
The resulting model produces more realistic predictions for a longer period of time compared to the recent state-of-the-art methods. The proposed approach can easily be extended to standard LSTM or Gated Recurrent Units (GRU) and applied to tasks that require a long-term memory trace, e.g., language modeling or speech recognition." }, { "heading": "A NETWORK DETAILS", "text": "Each layer of ContextVP-4 contains five Parallel Multi-Dimensional LSTM (PMD) units (see Figure 8). The parameters of opposite directions are shared. We followed the same network architecture as Byeon et al. (2018) except for the number of neurons.

Weighted Blending: This layer learns the relative importance of each direction during training.

Sk = [H^{h+}_k, H^{h−}_k, H^{w+}_k, H^{w−}_k, H^{t−}_k], Mk = f(Wk · Sk + bk), (8)

where H is the output of the PMD units from the h+, h−, w+, w−, and t− directions, and W is the weight matrix.

Directional Weight-Sharing: Weights and biases of the PMD units in opposite directions (i.e., the h+ and h− directions, w+ and w− directions) are shared.

Skip connection in a recurrent block (R-Skip): A skip connection is commonly used in LSTM layers (Melis et al., 2017; Merity et al., 2017; Wang et al., 2018a) for long-term gradient propagation. Our proposed model concatenates outputs of the previous and current LSTM layers (see Figure 9).

Gated skip connection across layers (L-Skip): Unlike in (Byeon et al., 2018), we added a gated skip connection across layers. In our model, a multiplicative gate is added to control the flow of information across layers. It stabilizes learning as depth increases. A layer is formulated as:

Y = Hl · Tl + Xl−i · (1 − Tl), i < l, (9)

where Hl = Xl · W^H_l + b^H and Tl = σ(Xl · W^T_l + b^T). Xl is the input at the l-th layer, T is a gate, and Hl is the transformation of the input at layer l.

We adjusted the number of neurons, so all models have similar model size (≈ 17M). LH-STM Double contains 64-64-128-128 neurons for the input size of 64× 64 pixels and 64-128-128-128 for 128× 128. LH-STM Single contains 128-128-128-128 neurons for the input size of 64× 64 pixels and 64-128-128-256 for 128× 128. The initial learning rate is 1e-4, found by grid search between 1e-3 and 1e-5. The learning rate is decayed every 10 epochs with the decay rate 0.95. Weights are initialized using Xavier's normalized initializer (Glorot & Bengio, 2010) and the recurrent states are all initialized to zero." }, { "heading": "B ABLATION STUDY", "text": "We performed a series of ablation studies to evaluate the importance of each component of our proposed model: 1) RNN type (standard ConvLSTM, Single LH-STM, or Double LH-STM); 2) different losses (L1, L2, and the perceptual loss); 3) adding skip connections in recurrent blocks; and 4) adding skip connections across layers (see Section 3.3 for details). We evaluated performance on the KTH human actions dataset with a resolution of 64 × 64 pixels. The weights for the loss, λ1, λ2, and λpl, are tested between zero and one. The results of this experiment are summarized in Table 4. We found that performance improves by adding skip connections in recurrent blocks and across layers. Double LH-STM outperforms the standard ConvLSTM and Single LH-STM. The combination of L1, L2, and Lpl also improves the performance (with λ1 = 1, λ2 = 1, λpl = 1)." }, { "heading": "C EVALUATION OF STOCHASTIC MODELS", "text": "SAVP-VAE numbers in the paper by Lee et al.
(2018) are obtained by first generating 100 output samples per input and then selecting the best generated sample for each input based on the highest score between the output and the ground truth. This evaluation measures the best prediction the model can do, given 100 tries. However, it does not provide an overall picture of the generations from the model. In real-world scenarios, the ground truth is not available, so the ‘best’ samples cannot be picked as done in such evaluations. Therefore, we re-computed the scores for stochastic models based on their ‘median’ output sample (instead of best) among the 100 randomly generated ones, compared to the ground truth. This strategy measures how well the model can be expected to perform on average, and so it is a more representative score." }, { "heading": "D EVALUATION WITH LARGE AND SMALL MOTION", "text": "We split the video samples into five bins by motion magnitude, computed as the averaged L2 norm between target frames, similar to the evaluations by Villegas et al. (2017a). Figure 12 shows the comparisons on the KTH human actions dataset, and ?? shows the comparisons on the BAIR action-free robot pushing dataset. Overall, across all metrics, all models perform worse when the motion is larger. For PSNR and SSIM, Double LH-STM achieves the best performance except for the largest motion. For LPIPS, Double LH-STM performs better for small motions (first two bins). SAVP-Deterministic performs the best on this metric for larger motions." }, { "heading": "E QUALITATIVE RESULTS", "text": "Figure 14 and Figure 15 show prediction results on the KTH human actions dataset, and Figure 17 and Figure 18 present the results on the BAIR action-free robot pushing dataset. Figure 16 and Figure 19 include samples with some of the largest motions for both datasets. When motion is larger, all models fail to produce reasonable predictions, whether they are deterministic or stochastic." }, { "heading": "F RETRAINING PREDRNN++", "text": "As mentioned in the paper, we re-trained PredRNN++ on the KTH human actions dataset using the provided code by Wang et al. (2018a) (https://github.com/Yunbo426/predrnn-pp). We conducted a grid hyper-parameter search for the learning rate and the patch size. We found in the code that there is an option to divide the input frames into smaller patches. The learning rate search was performed in the range between 1e-3 and 1e-4. The range of patch size was 1, 2, or 4 for both frame resolutions (64× 64 and 128× 128 pixels). The reported results in the paper are for a learning rate of 1e-4 and the patch size 1 (full resolution)." } ]
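A sketch of the ‘median output sample’ protocol from Appendix C, assuming a higher-is-better `metric` such as PSNR or SSIM; the function names are illustrative.

```python
import numpy as np

def median_sample_score(metric, samples, target):
    """samples: list of 100 predicted sequences for one input video;
    target: the ground-truth sequence. Returns the median per-sample score,
    i.e., how the stochastic model performs on a typical draw."""
    scores = [metric(s, target) for s in samples]
    return float(np.median(scores))

# the best-sample evaluation of Lee et al. (2018) would instead take max(scores)
```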
2019
null
SP:69da1cecdf9fc25a9e6263943a5396b606cdcfef
[ "In this work, the authors show that the sequence of self-attention and feed-forward layers within a Transformer can be interpreted as an approximate numerical solution to a set of coupled ODEs. Based on this insight, the authors propose to replace the first-order Lie-Trotter splitting scheme by the more accurate, second-order Strang splitting scheme. They then present experimental results that indicate an improved performance of their Macaron Net compared to the Transformer and argue that this is due to the former being a more accurate numerical solution to the underlying set of ODEs.", "The paper points out a formal analogy between transformers and an ODE modelling multi-particle convection (the feed-forward network) and diffusion (the self-attention head). The paper then adapts the Strang-Marchuk splitting scheme for solving ODEs to construct a slightly different transformer architecture: “FFN of Attention of FFN”, instead of “FFN of Attention”. The new architecture, refered to as a Macaron-Net, yields better performance in a variety of experiments." ]
The Transformer architecture is widely used in natural language processing. Despite its success, the design principle of the Transformer remains elusive. In this paper, we provide a novel perspective towards understanding the architecture: we show that the Transformer can be mathematically interpreted as a numerical Ordinary Differential Equation (ODE) solver for a convection-diffusion equation in a multi-particle dynamic system. In particular, how words in a sentence are abstracted into contexts by passing through the layers of the Transformer can be interpreted as approximating multiple particles' movement in space using the Lie-Trotter splitting scheme and Euler's method. Given this ODE perspective, the rich literature of numerical analysis can be brought to guide us in designing effective structures beyond the Transformer. As an example, we propose to replace the Lie-Trotter splitting scheme by the Strang-Marchuk splitting scheme, a scheme that is more commonly used and has much lower local truncation error. The Strang-Marchuk splitting scheme suggests that the self-attention and position-wise feed-forward network (FFN) sub-layers should not be treated equally. Instead, in each layer, two position-wise FFN sub-layers should be used, and the self-attention sub-layer is placed in between. This leads to a brand new architecture. Such an FFN-attention-FFN layer is "Macaron-like", and thus we call the network with this new architecture the Macaron Net. Through extensive experiments, we show that the Macaron Net is superior to the Transformer on both supervised and unsupervised learning tasks. The reproducible code can be found on http://anonymized
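A minimal sketch of the Macaron layer described in the abstract, assuming standard attention and FFN sub-modules; the 0.5 factors mirror the half-step residual connections suggested by the Strang-Marchuk scheme. Layer normalization and dropout are omitted for brevity, and all module names are illustrative.

```python
import torch.nn as nn

class MacaronLayer(nn.Module):
    """FFN -> self-attention -> FFN, with half-step residuals on the FFNs."""
    def __init__(self, d_model, n_heads, d_ff):
        super().__init__()
        self.ffn1 = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                  nn.Linear(d_ff, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads)
        self.ffn2 = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                  nn.Linear(d_ff, d_model))

    def forward(self, x):              # x: (seq_len, batch, d_model)
        x = x + 0.5 * self.ffn1(x)     # half-step convection (FFN)
        x = x + self.attn(x, x, x)[0]  # full-step diffusion (self-attention)
        x = x + 0.5 * self.ffn2(x)     # half-step convection (FFN)
        return x
```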
[]
[ { "authors": [ "Karim Ahmed", "Nitish Shirish Keskar", "Richard Socher" ], "title": "Weighted transformer network for machine translation", "venue": "arXiv preprint arXiv:1711.02132,", "year": 2017 }, { "authors": [ "Rami Al-Rfou", "Dokook Choe", "Noah Constant", "Mandy Guo", "Llion Jones" ], "title": "Character-level language modeling with deeper self-attention", "venue": "arXiv preprint arXiv:1808.04444,", "year": 2018 }, { "authors": [ "Uri M Ascher", "Linda R Petzold" ], "title": "Computer methods for ordinary differential equations and differential-algebraic equations, volume 61", "venue": null, "year": 1998 }, { "authors": [ "Luisa Bentivogli", "Ido Dagan", "Hoa Trang Dang", "Danilo Giampiccolo", "Bernardo. Magnini" ], "title": "The fifth PASCAL recognizing textual entailment challenge", "venue": "In TAC,", "year": 2009 }, { "authors": [ "Alexander V Bobylev", "Taku Ohwada" ], "title": "The error of the splitting scheme for solving evolutionary equations", "venue": "Applied Mathematics Letters,", "year": 2001 }, { "authors": [ "Daniel Cer", "Mona Diab", "Eneko Agirre", "Inigo Lopez-Gazpio", "Lucia Specia" ], "title": "Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation", "venue": "arXiv preprint arXiv:1708.00055,", "year": 2017 }, { "authors": [ "Bo Chang", "Minmin Chen", "Eldad Haber", "Ed H Chi" ], "title": "Antisymmetricrnn: A dynamical system view on recurrent neural networks", "venue": null, "year": 1902 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Z. Chen", "H. Zhang", "X. Zhang", "L. Zhao. Quora question pairs." ], "title": "URL https://data", "venue": "quora.com/First-Quora-Dataset-Release-Question-Pairs.", "year": 2018 }, { "authors": [ "Carmen Chicone" ], "title": "Ordinary differential equations by vladimir", "venue": "i. arnold. Siam Review,", "year": 2007 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "William W Cohen", "Jaime Carbonell", "Quoc V Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": null, "year": 1901 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "William B Dolan", "Chris. Brockett" ], "title": "Automatically constructing a corpus of sentential paraphrases", "venue": "In Proceedings of the International Workshop on Paraphrasing.,", "year": 2005 }, { "authors": [ "Sergey Edunov", "Myle Ott", "Michael Auli", "David Grangier" ], "title": "Understanding back-translation at scale", "venue": "arXiv preprint arXiv:1808.09381,", "year": 2018 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N Dauphin" ], "title": "Convolutional Sequence to Sequence Learning", "venue": "In Proc. 
of ICML,", "year": 2017 }, { "authors": [ "Juergen Geiser" ], "title": "Decomposition methods for differential equations: theory and applications", "venue": "CRC Press,", "year": 2009 }, { "authors": [ "Roland Glowinski", "Stanley J Osher", "Wotao Yin" ], "title": "Splitting methods in communication, imaging", "venue": null, "year": 2017 }, { "authors": [ "Eldad Haber", "Lars Ruthotto" ], "title": "Stable architectures for deep neural networks", "venue": "Inverse Problems,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Philipp Koehn", "Hieu Hoang", "Alexandra Birch", "Chris Callison-Burch", "Marcello Federico", "Nicola Bertoldi", "Brooke Cowan", "Wade Shen", "Christine Moran", "Richard Zens", "Chris Dyer", "Ondrej Bojar", "Alexandra Constantin", "Evan Herbst. Moses" ], "title": "Open source toolkit for statistical machine translation", "venue": "In ACL,", "year": 2007 }, { "authors": [ "Hector J Levesque", "Ernest Davis", "Leora. Morgenstern" ], "title": "The Winograd schema challenge", "venue": "In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.,", "year": 2011 }, { "authors": [ "Qianli Liao", "Tomaso Poggio" ], "title": "Bridging the gaps between residual learning, recurrent neural networks and visual cortex", "venue": "arXiv preprint arXiv:1604.03640,", "year": 2016 }, { "authors": [ "Yiping Lu", "Aoxiao Zhong", "Quanzheng Li", "Bin Dong" ], "title": "Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations", "venue": "arXiv preprint arXiv:1710.10121,", "year": 2017 }, { "authors": [ "Brian W Matthews" ], "title": "Comparison of the predicted and observed secondary structure of t4 phage lysozyme", "venue": "Biochimica et Biophysica Acta (BBA)-Protein Structure,", "year": 1975 }, { "authors": [ "Robert I McLachlan", "G Reinout W" ], "title": "Quispel. Splitting methods", "venue": "Acta Numerica,", "year": 2002 }, { "authors": [ "Forest Ray Moulton" ], "title": "An introduction to celestial mechanics", "venue": "Courier Corporation,", "year": 2012 }, { "authors": [ "Myle Ott", "Sergey Edunov", "David Grangier", "Michael Auli" ], "title": "Scaling neural machine translation", "venue": "arXiv preprint arXiv:1806.00187,", "year": 2018 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th annual meeting on association for computational linguistics,", "year": 2002 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "arXiv preprint arXiv:1802.05365,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy. 
Liang" ], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "In Proceedings of EMNLP.,", "year": 2016 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "In ACL,", "year": 2016 }, { "authors": [ "Peter Shaw", "Jakob Uszkoreit", "Ashish Vaswani" ], "title": "Self-attention with relative position representations", "venue": "arXiv preprint arXiv:1803.02155,", "year": 2018 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Ng", "Christopher. Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of EMNLP.,", "year": 2013 }, { "authors": [ "Sho Sonoda", "Noboru Murata" ], "title": "Transport analysis of infinitely deep neural network", "venue": "The Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Gilbert Strang" ], "title": "On the construction and comparison of difference schemes", "venue": "SIAM Journal on Numerical Analysis,", "year": 1968 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Yunzhe Tao", "Qi Sun", "Qiang Du", "Wei Liu" ], "title": "Nonlocal neural networks, nonlocal diffusion and nonlocal modeling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Matthew Thorpe", "Yves van Gennip" ], "title": "Deep limits of residual neural networks", "venue": "arXiv preprint arXiv:1810.11741,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In the Proceedings of ICLR", "year": 2019 }, { "authors": [ "Alex Warstadt", "Amanpreet Singh", "Samuel R. Bowman" ], "title": "Neural network acceptability judgments", "venue": "arXiv preprint 1805.12471,", "year": 2018 }, { "authors": [ "E Weinan" ], "title": "A proposal on machine learning via dynamical systems", "venue": "Communications in Mathematics and Statistics,", "year": 2017 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel R. 
Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "In Proceedings of NAACL-HLT.,", "year": 2018 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann N Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": "arXiv preprint arXiv:1901.10430,", "year": 2019 }, { "authors": [ "Lijun Wu", "Yiren Wang", "Yingce Xia", "Fei Tian", "Fei Gao", "Tao Qin", "Jianhuang Lai", "Tie-Yan Liu" ], "title": "Depth growing for neural machine translation", "venue": "CoRR, abs/1907.01968,", "year": 2019 }, { "authors": [ "Hongyi Zhang", "Yann N Dauphin", "Tengyu Ma" ], "title": "Fixup initialization: Residual learning without normalization", "venue": "arXiv preprint arXiv:1901.09321,", "year": 2019 }, { "authors": [ "Xiaoshuai Zhang", "Yiping Lu", "Jiaying Liu", "Bin Dong" ], "title": "Dynamically unfolding recurrent restorer: A moving endpoint control method for image restoration", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mai Zhu", "Bo Chang", "Chong Fu" ], "title": "Convolutional neural networks combined with runge-kutta methods", "venue": "arXiv preprint arXiv:1802.08831,", "year": 2018 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Richard Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In arXiv preprint arXiv:1506.06724,", "year": 2015 }, { "authors": [ "Vaswani" ], "title": "For the big setting, we enlarge the batch size and learning", "venue": null, "year": 2017 }, { "authors": [ "Devlin" ], "title": "Under review as a conference paper at ICLR 2020 B.2 UNSUPERVISED PRETRAINING Pre-training dataset We follow Devlin et al. (2018) to use English Wikipedia corpus and BookCorpus for pre-training", "venue": "As the dataset BookCorpus (Zhu et al.,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "The Transformer is one of the most commonly used neural network architectures in natural language processing. Variants of the Transformer have achieved state-of-the-art performance in many tasks including language modeling (Dai et al., 2019; Al-Rfou et al., 2018) and machine translation (Vaswani et al., 2017; Dehghani et al., 2018; Edunov et al., 2018). Unsupervised pre-trained models based on the Transformer architecture also show impressive performance in many downstream tasks (Radford et al., 2019; Devlin et al., 2018).\nThe Transformer architecture is mainly built by stacking layers, each of which consists of two sub-layers with residual connections: the self-attention sub-layer and the position-wise feed-forward network (FFN) sub-layer. For a given sentence, the self-attention sub-layer considers the semantics and dependencies of words at different positions and uses that information to capture the internal structure and representations of the sentence. The position-wise FFN sub-layer is applied to each position separately and identically to encode context at each position into higher-level representations. Although the Transformer architecture has demonstrated promising results in many tasks, its design principle is not fully understood, and thus the strength of the architecture is not fully exploited. As far as we know, there is little work studying the foundation of the Transformer or different design choices.\nIn this paper, we provide a novel perspective towards understanding the architecture. In particular, we are the first to show that the Transformer architecture is inherently related to the Multi-Particle\nDynamic System (MPDS) in physics. MPDS is a well-established research field which aims at modeling how a collection of particles move in the space using differential equations (Moulton, 2012).\nIn MPDS, the behavior of each particle is usually modeled by two factors separately. The first factor is the convection which concerns the mechanism of each particle regardless of other particles in the system, and the second factor is the diffusion which models the movement of the particle resulting from other particles in the system.\nInspired by the relationship between the ODE and neural networks (Lu et al., 2017; Chen et al., 2018a), we first show that the Transformer layers can be naturally interpreted as a numerical ODE solver for a first-order convection-diffusion equation in MPDS. To be more specific, the selfattention sub-layer, which transforms the semantics at one position by attending over all other\npositions, corresponds to the diffusion term; The position-wise FFN sub-layer, which is applied to each position separately and identically, corresponds to the convection term. The number of stacked layers in the Transformer corresponds to the time dimension in ODE. In this way, the stack of self-attention sub-layers and position-wise FFN sub-layers with residual connections can be viewed as solving the ODE problem numerically using the Lie-Trotter splitting scheme (Geiser, 2009) and the Euler’s method (Ascher & Petzold, 1998). 
By this interpretation, we have a novel understanding of learning contextual representations of a sentence using the Transformer: the feature (a.k.a, embedding) of words in a sequence can be considered as the initial positions of a collection of particles, and the latent representations abstracted in stacked Transformer layers can be viewed as the location of particles moving in a high-dimensional space at different time points.\nSuch an interpretation not only provides a new perspective on the Transformer but also inspires us to design new structures by leveraging the rich literature of numerical analysis. The Lie-Trotter splitting scheme is simple but not accurate and often leads to high approximation error (Geiser, 2009). The Strang-Marchuk splitting scheme (Strang, 1968) is developed to reduce the approximation error by a simple modification to the Lie-Trotter splitting scheme and is theoretically more accurate. Mapped to neural network design, the Strang-Marchuk splitting scheme suggests that there should be three sub-layers: two position-wise feed-forward sub-layers with half-step residual connections and one self-attention sub-layer placed in between with a full-step residual connection. By doing so, the stacked layers will be more accurate from the ODE’s perspective and will lead to better performance in deep learning. As the FFN-attention-FFN layer is “Macaron-like”, we call it Macaron layer and call the network composed of Macaron layers the Macaron Net.\nWe conduct extensive experiments on both supervised and unsupervised learning tasks. For each task, we replace Transformer layers by Macaron layers and keep the number of parameters to be the same. Experiments show that the Macaron Net can achieve higher accuracy than the Transformer on all tasks which, in a way, is consistent with the ODE theory." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 RELATIONSHIP BETWEEN NEURAL NETWORKS AND ODE", "text": "Recently, there are extensive studies to bridge deep neural networks with ordinary differential equations (Weinan, 2017; Lu et al., 2017; Haber & Ruthotto, 2017; Chen et al., 2018a; Zhang et al., 2019b; Sonoda & Murata, 2019; Thorpe & van Gennip, 2018). We here present a brief introduction to such a relationship and discuss how previous works borrow powerful tools from numerical analysis to help deep neural network design.\nA first-order ODE problem is usually defined as to solve the equation (i.e., calculate x(t) for any t) which satisfies the following first-order derivative and the initial condition:\ndx(t)\ndt = f(x, t), x(t0) = w, (1)\nin which x(t) ∈ Rd for all t ≥ t0. ODEs usually have physical interpretations. For example, x(t) can be considered as the location of a particle moving in the d-dimensional space and the first order time derivative can be considered as the velocity of the particle.\nUsually there is no analytic solution to Eqn (1) and the problem has to be solved numerically. The simplest numerical ODE solver is the Euler’s method (Ascher & Petzold, 1998). The Euler’s method discretizes the time derivative dx(t)dt by its first-order approximation x(t2)−x(t1) t2−t1 ≈ f(x(t1), t1). By doing so, for the fixed time horizon T = t0 + γL, we can estimate x(T ) from x0 . = x(t0) by sequentially estimating xl+1 . = x(tl+1) using\nxl+1 = xl + γf(xl, tl) (2)\nwhere l = 0, · · · , L − 1, tl = t0 + γl is the time point corresponds to xl, and γ = (T − t0)/L is the step size. 
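As a concrete illustration of Eqn (2) (a sketch we add here; the test problem dx/dt = -x is an arbitrary choice), the following code estimates x(T) with the Euler's method. Note that each update has exactly the residual form x_{l+1} = x_l + gamma * f(x_l, t_l).

```python
import numpy as np

def euler_solve(f, x0, t0, T, L):
    """Estimate x(T) for dx/dt = f(x, t), x(t0) = x0, using L Euler steps."""
    gamma = (T - t0) / L           # step size
    x = np.asarray(x0, dtype=float)
    for l in range(L):
        t_l = t0 + gamma * l
        x = x + gamma * f(x, t_l)  # residual-style update, Eqn (2)
    return x

# dx/dt = -x has the exact solution x(T) = x0 * exp(-(T - t0)).
x_hat = euler_solve(lambda x, t: -x, x0=1.0, t0=0.0, T=1.0, L=100)
print(x_hat, np.exp(-1.0))  # ~0.3660 vs 0.3679; the gap shrinks as L grows
```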
As we can see, this is mathematically equivalent to the ResNet architecture (Lu et al., 2017; Chen et al., 2018a): The function γf(xl, tl) can be considered as a neural-network block, and the second argument tl in the function indicates the set of parameters in the l-th layer. The simple temporal discretization by Euler’s method naturally leads to the residual connection.\nObserving such a strong relationship, researchers use ODE theory to explain and improve the neural network architectures mainly designed for computer vision tasks. Lu et al. (2017); Chen et al. (2018a) show any parametric ODE solver can be viewed as a deep residual network (probably with infinite layers), and the parameters in the ODE can be optimized through backpropagation. Recent works discover that new neural networks inspired by sophisticated numerical ODE solvers can lead to better performance. For example, Zhu et al. (2018) uses a high-precision Runge-Kutta method to design a neural network, and the new architecture achieves higher accuracy. Haber & Ruthotto (2017) uses a leap-frog method to construct a reversible neural network. Liao & Poggio (2016); Chang et al. (2019) try to understand recurrent neural networks from the ODE’s perspective, and Tao et al. (2018) uses non-local differential equations to model non-local neural networks." }, { "heading": "2.2 TRANSFORMER", "text": "The Transformer architecture is usually developed by stacking Transformer layers (Vaswani et al., 2017; Devlin et al., 2018). A Transformer layer operates on a sequence of vectors and outputs a new sequence of the same shape. The computation inside a layer is decomposed into two steps: the vectors first pass through a (multi-head) self-attention sub-layer and the output will be further put into a position-wise feed-forward network sub-layer. Residual connection (He et al., 2016) and layer normalization (Lei Ba et al., 2016) are employed for both sub-layers. The visualization of a Transformer layer is shown in Figure 2(a) and the two sub-layers are defined as below.\nSelf-attention sub-layer The attention mechanism can be formulated as querying a dictionary with key-value pairs (Vaswani et al., 2017), e.g., Attention(Q,K, V ) = softmax(QKT / √ dmodel) · V, where dmodel is the dimensionality of the hidden representations and Q (Query), K (Key), V (Value) are specified as the hidden representations of the previous layer in the so-called self-attention sublayers in the Transformer architecture. The multi-head variant of attention allows the model to jointly attend to information from different representation subspaces, and is defined as\nMulti-head(Q,K, V ) = Concat(head1, · · · , headH)WO, (3) headk = Attention(QW Q k ,KW K k , V W V k ), (4)\nwhere WQk ∈ Rdmodel×dK ,WKk ∈ Rdmodel×dK ,WVk ∈ Rdmodel×dV , and WO ∈ RHdV ×dmodel are project parameter matrices, H is the number of heads, and dK and dV are the dimensionalities of Key and Value.\nPosition-wise FFN sub-layer In addition to the self-attention sub-layer, each Transformer layer also contains a fully connected feed-forward network, which is applied to each position separately and identically. This feed-forward network consists of two linear transformations with an activation function σ in between. Specially, given vectors h1, . . . 
, hn, a position-wise FFN sub-layer transforms each hi as FFN(hi) = σ(hiW1 + b1)W2 + b2, where W1,W2, b1 and b2 are parameters.\nIn this paper, we take the first attempt to provide an understanding of the feature extraction process in natural language processing from the ODE’s viewpoint. As discussed in Section 2.1, several\nworks interpret the standard ResNet using the ODE theory. However, we found this interpretation cannot be directly applied to the Transformer architecture. First, different from vision applications whose size of the input (e.g., an image) is usually predefined and fixed, the input (e.g., a sentence) in natural language processing is always of variable length, which makes the single-particle ODE formulation used in previous works not applicable. Second, the Transformer layer contains very distinct sub-layers. The self-attention sub-layer takes the information from all positions as input while the position-wise feed-forward layer is applied to each position separately. How to interpret these heterogeneous components by ODE is also not covered by previous works (Tao et al., 2018; Chen et al., 2018a)." }, { "heading": "3 REFORMULATE TRANSFORMER LAYERS AS AN ODE SOLVER FOR MULTI-PARTICLE DYNAMIC SYSTEM", "text": "In this section, we first introduce the general form of differential equations in MPDS and then reformulate the stacked Transformer layers to show they form a numerical ODE solver for a specific problem. After that, we use advanced methods in the ODE theory to design new architectures." }, { "heading": "3.1 MULTI-PARTICLE ODE AND ITS NUMERICAL SOLVER", "text": "Understanding the dynamics of multiple particles’ movements in space is one of the important problems in physics, especially in fluid mechanics and astrophysics (Moulton, 2012). The behavior of each particle is usually modeled by two factors: The first factor concerns about the mechanism of its movement regardless of other particles, e.g., caused by an external force outside of the system, which is usually referred to as the convection; The second factor concerns about the movement resulting from other particles, which is usually referred to as the diffusion. Mathematically, assume there are n particles in d-dimensional space. Denote xi(t) ∈ Rd as the location of i-th particle at time t. The dynamics of particle i can be formulated as\ndxi(t)\ndt = F (xi(t), [x1(t), · · · , xn(t)], t) +G(xi(t), t),\nxi(t0) = wi, i = 1, . . . , n. (5)\nFunction F (xi(t), [x1(t), · · · , xn(t)], t) represents the diffusion term which characterizes the interaction between the particles. G(x, t) is a function which takes a location x and time t as input and represents the convection term.\nSplitting schemes As we can see, there are two coupled terms in the right-hand side of Eqn (5) describing different physical phenomena. Numerical methods of directly solving such ODEs can be complicated. The splitting method is a prevailing way of solving such coupled differential equations that can be decomposed into a sum of differential operators (McLachlan & Quispel, 2002). Furthermore, splitting convection from diffusion is quite standard for many convection-diffusion equations (Glowinski et al., 2017; Geiser, 2009). The Lie-Trotter splitting scheme (Geiser, 2009) is the simplest splitting method. It splits the right-hand side of Eqn (5) into function F (·) and G(·) and solves the individual dynamics alternatively. 
More precisely, to compute xi(t+ γ) from xi(t), the Lie-Trotter splitting scheme with the Euler’s method reads as\nx̃i(t) = xi(t) + γF (xi(t), [x1(t), x2(t), · · · , xn(t)], t), (6) xi(t+ γ) = x̃i(t) + γG(x̃i(t), t). (7)\nFrom time t to time t+ γ, the Lie-Trotter splitting method first solves the ODE with respect to F (·) and acquire an intermediate location x̃i(t). Then, starting from x̃i(t) , it solves the second ODE with respect to G(·) to obtain xi(t+ γ)." }, { "heading": "3.2 PHYSICAL INTERPRETATION OF THE TRANSFORMER", "text": "We reformulate the two sub-layers of the Transformer in order to match its form with the ODE described above. Denote xl = (xl,1, . . . , xl,n) as the input to the l-th Transformer layer, where n is the sequence length and xl,i is a real-valued vector in Rd for any i.\nReformulation of the self-attention sub-layer Denote x̃l,i as the output of the (multi-head) selfattention sub-layer at position i with residual connections. The computation of x̃l,i can be written as\nx̃l,i = xl,i + Concat (head1, ..., headH)WO,l, (8)\nwhere headk = n∑ j=1 α (k) ij [xl,jW V,l k ] = n∑ j=1\n( exp(e\n(k) ij )∑n\nq=1 exp(e (k) iq )\n) [xl,jW V,l k ], (9)\nand e(k)ij is computed as the dot product of input xl,i and xl,j with linear projection matrices W Q,l k and WK,lk , i.e., e (k) ij = d −1/2 model · (xl,iW Q,l k )(xl,jW K,l k ) T . Considering α(k)ij as a normalized value of the pair-wise dot product e(k)ij over j, we can generally reformulate Eqn (8) as\nx̃l,i = xl,i + MultiHeadAttW latt (xl,i, [xl,1, xl,2, · · · , xl,n]), (10)\nwhere W latt denotes all trainable parameters in the l-th self-attention sub-layer.\nReformulation of the position-wise FFN sub-layer Next, x̃l,i is put into the position-wise FFN sub-layer with residual connections and output xl+1,i. The computation of xl+1,i can be written as\nxl+1,i = x̃l,i + FFNW lffn (x̃l,i), (11)\nwhere W lffn denotes all trainable parameters in the l-th position-wise FFN sub-layer.\nReformulation of Transformer layers Combining Eqn (10) and (11), we reformulate the Transformer layers1 as\nx̃l,i = xl,i + MultiHeadAttW latt (xl,i, [xl,1, xl,2, · · · , xl,n]), (12) xl+1,i = x̃l,i + FFNW lffn (x̃l,i). (13)\nWe can see that the Transformer layers (Eqn (12-13)) resemble the multi-particle ODE solver in Section 3.1 (Eqn (6-7)). Indeed, we can formally establish the link between the ODE solver with splitting scheme and stacked Transformer layers as below.\nClaim 1. Define γF ∗(xl,i, [xl,1, · · · , xl,n], tl) = MultiHeadAttW latt (xl,i, [xl,1, · · · , xl,n]) and γG∗(xl,i, tl) = FFNW lffn (xl,i). The Transformer can be viewed as a numerical ODE solver using Lie-Trotter splitting scheme and the Euler’s method (with time step γ) for Eqn (5) with F ∗ and G∗.\nThe above observation grants a physical interpretation of natural language processing and provides a new perspective on the Transformer architecture. First, this perspective provides a unified view of the heterogeneous components in the Transformer. The self-attention sub-layer is viewed as a diffusion term which characterizes the particle interactions while the position-wise feed-forward network sub-layer is viewed as a convection term. The two terms together naturally form the convectiondiffusion equation in physics. Second, this interpretation advances our understanding of the latent representations of language through the Transformer. 
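To make Claim 1 concrete, the following PyTorch sketch (our own illustration; the module and its hyperparameters are assumptions, not part of the paper) writes one Transformer layer literally as a Lie-Trotter/Euler step of Eqns (12)-(13), with the attention branch playing the role of gamma*F* and the FFN branch playing gamma*G*.

```python
import torch
import torch.nn as nn

class LieTrotterTransformerLayer(nn.Module):
    """Eqns (12)-(13): one splitting step with F* = attention, G* = FFN."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))

    def forward(self, x):                     # x: (batch, n, d_model)
        x = x + self.attn(x, x, x)[0]         # diffusion step, Eqn (12)
        x = x + self.ffn(x)                   # convection step, Eqn (13)
        return x

x = torch.randn(2, 10, 512)                   # 10 "particles" per sentence
print(LieTrotterTransformerLayer()(x).shape)  # torch.Size([2, 10, 512])
```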
Viewing the feature (a.k.a., embedding) of words in a sequence as the initial position of particles, we can interpret the latent representations of the sentence abstracted by the Transformer as particles moving in a high-dimensional space as demonstrated in Figure 1 (Zhang, 2019)." }, { "heading": "3.3 IMPROVING TRANSFORMER VIA STRANG-MARCHUK SPLITTING SCHEME", "text": "In the previous subsection, we have successfully mapped the Transformer architecture to a numerical ODE solver for MPDS. However, we would like to point out that one of the key components in this ODE solver, the Lie-Trotter splitting scheme, is the simplest one but has relatively high errors. In this\n1Layer normalization is sometimes applied to the sub-layers but recent work (Zhang et al., 2019a) shows that the normalization trick is not essential and can be removed. One can still readily check that the reformulation (Eqn (12) and (13)) still holds with layer normalization.\nsubsection, we incorporate one of the most popular and widely used splitting scheme (Geiser, 2009), the Strang-Marchuk splitting scheme, into the design of the neural networks.\nThe Lie-Trotter splitting scheme solves the dynamics of F (·) and G(·) alternatively and exclusively in that order. This inevitably brings bias and leads to higher local truncation errors (Geiser, 2009). To mitigate the bias, we use a simple modification to the Lie-Trotter splitting scheme by dividing the one-step numerical solver for G(·) into two half-steps: we put one half-step before solving F (·) and put the other half-step after solving F (·). This modified splitting scheme is known as the Strang-Marchuk splitting scheme (Strang, 1968). Mathematically, to compute xi(t+ γ) from xi(t), the Strang-Marchuk splitting scheme reads as\nx̃i(t) = xi(t) + γ\n2 G(xi(t), t), (14)\nx̂i(t) = x̃i(t) + γF (x̃i(t), [x̃1(t), x̃2(t), · · · , x̃n(t)], t), (15)\nxi(t+ γ) = x̂i(t) + γ 2 G ( x̂i(t), t+ γ 2 ) . (16)\nThe Strang-Marchuk splitting scheme enjoys higher-order accuracy than the Lie-Trotter splitting scheme (Bobylev & Ohwada, 2001) in terms of the local truncation error (Ascher & Petzold, 1998), which measures the per-step distance between the true solution and the approximated solution using numerical schemes. Mathematically, for a differential equation dx(t)dt = f(x, t) and a numerical scheme A, the local truncation error of numerical schemeA is defined as τ = x(t+ γ)−A(x(t), γ). For example, when A is the Euler’s method, τEuler = x(t+ γ)− x(t)− γf(x(t), t). The order of local truncation error of the two schemes has been studied in Bobylev & Ohwada (2001), as shown in the following theorem. Theorem 1. (Bobylev & Ohwada, 2001) The local truncation error of the Lie-Trotter splitting scheme is second-order (O(γ2)) and the local truncation error of the Strang-Marchuk splitting scheme is third-order (O(γ3)).\nFor completeness, we provide the formal theorem with proof in the appendix. We can see from Eqn (14-16) that the Strang-Marchuk splitting scheme uses a three-step process to solve the ODE. Mapped to neural network design, the Strang-Marchuk splitting scheme (together with the Euler’s method) suggests there should also be three sub-layers instead of the two sub-layers in the Transformer. By replacing function γF and γG by MultiHeadAtt and FFN, we have\nx̃l,i = xl,i + 1\n2 FFNW l,downffn (xl,i), (17)\nx̂l,i = x̃l,i + MultiHeadAttW latt (x̃l,i, [x̃l,1, x̃l,2, · · · , x̃l,n]), (18)\nxl+1,i = x̂l,i + 1\n2 FFNW l,upffn (x̂l,i). 
(19)\nFrom Eqn (17-19), we can see that the new layer composes of three sub-layers. Each hidden vector at different positions will first pass through the first position-wise FFN sub-layer with a half-step 2 residual connection (“12” in Eqn (17)), and then the output vectors will be feed into a self-attention sub-layer. In the last step, the output vectors from the self-attention sub-layer will be put into the second position-wise FFN sub-layer with a half-step residual connection. Since the FFN-attentionFFN structure is “Macaron”-like, we call the layer as Macaron layer and call the network using Macaron layers as Macaron Net, as shown in Figure 2(b). Previous works (Lu et al., 2017; Zhu et al., 2018) have successfully demonstrated that the neural network architectures inspired by higher-order accurate numerical ODE solvers will lead to better results in deep learning and we believe the Macaron Net can achieve better performance on practical natural language processing applications than the Transformer." }, { "heading": "4 EXPERIMENTS", "text": "We test our proposed Macaron architectures in both supervised and unsupervised learning setting. For the supervised learning setting, we use IWLST14 and WMT14 machine translation datasets. For the unsupervised learning setting, we pretrain the model using the same method as in Devlin et al. (2018) and test the learned model over a set of downstream tasks. Extra descriptions about datasets, model specifications, and hyperparameter configurations can be found in the appendix." }, { "heading": "4.1 EXPERIMENT SETTINGS", "text": "Machine Translation Machine translation is an important application for natural language processing (Vaswani et al., 2017). We evaluate our methods on two widely used public datasets: IWSLT14 German-to-English (De-En) and WMT14 English-to-German (En-De) dataset.\nFor the WMT14 dataset, the basic configurations of the Transformer architecture are the base and the big settings (Vaswani et al., 2017). Both of them consist of a 6-layer encoder and 6-layer decoder. The size of the hidden nodes and embeddings are set to 512 for base and 1024 for big. The number of heads are 8 for base and 16 for big. Since the IWSLT14 dataset is much smaller than the WMT14 dataset, the small setting is usually used, whose size of hidden states and embeddings is set to 512 and the number of heads is set to 4. For all settings, the dimensionality of the inner-layer of the position-wise FFN is four times of the dimensionality of the hidden states.\nFor each setting (base, big and small), we replace all Transformer layers by the Macaron layers3 and obtain the base, big and small Macaron, each of which contains two position-wise feed-\n2The half-step is critical in the Strang-Marchuk splitting scheme to solve an ODE problem, but it may be not essential in training a particular neural network. For example, the FFN sub-layer in the Transformer is designed as FFN(hi) = σ(hiW1 + b1)W2 + b2. Placing 1/2 to rescale this FFN sub-layer is equivalent to rescale the parameter W2 and b2 from the beginning of the optimization.\n3The translation model is based on the encoder-decoder framework. In the Transformer, the decoder layer has a third sub-layer which performs multi-head attention over the output of the encoder stack (encoder-decoderattention) and a mask to prevent positions from attending to subsequent positions. 
In our implementation of Macaron decoder, we also use masks and split the FFN into two sub-layers and thus our decoder layer is (FFN, self-attention, encoder-decoder-attention, and FFN).\nforward sub-layers in a layer. To make a fair comparison, we set the dimensionality of the inner-layer of the two FFN sub-layers in the Macaron layers to two times of the dimensionality of the hidden states. By doing this, the base, big and small Macaron have the same number of parameters as the base, big and small Transformer respectively.\nUnsupervised Pretraining BERT (Devlin et al., 2018) is the current state-of-the-art pre-trained contextual representation model based on a multi-layer Transformer encoder architecture and trained by masked language modeling and next-sentence prediction tasks. We compare our proposed Macaron Net with the base setting from the original paper (Devlin et al., 2018), which consists of 12 Transformer layers. The size of hidden states and embeddings are set to 768, and the number of attention heads is set to 12. Similarly, we replace the Transformer layers in BERT base by the Macaron layers and reduce the dimensionality of the inner-layer of the two FFN sub-layers by half, and thus we keep the number of parameters of our Macaron base same as BERT base." }, { "heading": "4.2 EXPERIMENT RESULTS", "text": "Machine Translation We use BLEU (Papineni et al., 2002) as the evaluation measure for machine translation. Following common practice, we use tokenized case-sensitive BLEU and case-insensitive BLEU for WMT14 En-De and IWSLT14 De-En respectively.\nThe results for machine translation are shown in Table 1. For the IWSLT14 dataset, our Macaron small outperforms the Transformer small by 1.0 in terms of BLEU. For the WMT14 dataset, our Macaron base outperforms its Transformer counterpart by 1.6 BLEU points. Furthermore, the performance of our Macaron base model is even better than that of the Transformer big model reported in Vaswani et al. (2017), but with much less number of parameters. Our Macaron big outperforms the Transformer big by 1.8 BLEU points. Comparing with other concurrent works, the improvements in our proposed method are still significant.\nUnsupervised Pretraining Following Devlin et al. (2018), we evaluate all models by fine-tuning them on 8 downstream tasks in the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019), including CoLA (Warstadt et al., 2018), SST-2 (Socher et al., 2013), MRPC (Dolan & Brockett, 2005), STS-B (Cer et al., 2017), QQP (Chen et al., 2018b), MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), and RTE (Bentivogli et al., 2009). More details about individual tasks and their evaluation metrics can be found in the appendix and Wang et al. (2019); Devlin et al. (2018). To fine-tune the models, we follow the hyperparameter search space in Devlin et al. (2018) for all downstream tasks, including different batch sizes, learning rates, and numbers of epochs.\nThe GLUE results are presented in Table 2. We present the results of two BERT base models. One is from Devlin et al. (2018), and the other is trained using our own data. Due to that some pieces of data used in Devlin et al. (2018) are no longer freely distributed, the two numbers are slightly different. We can see from the table, our proposed Macaron Net base outperforms all baselines in terms of the general GLUE score. Specifically, given the same dataset, our proposed model outperforms our trained BERT base model in all tasks. 
Even comparing with the BERT base model in Devlin et al. (2018), our model performs better in 6 out of 8 tasks and achieves close performance in the rest 2 tasks.\nAs a summary, the improvement in both machine translation and GLUE tasks well aligns with the ODE theory and our proposed architecture performs better than the Transformer in real practice." }, { "heading": "4.3 ADDITIONAL COMPARISONS", "text": "As we can see, the main difference between Macaron Net and the Transformer is that Macaron Net uses two FFN sub-layers in a layer block while the Transformer just uses one. One may argue that the improvements of the new architecture may be simply due to adding nonlinearities but not from ODE theory. We point out here that adding nonlinearities does not always guarantee improvements in deep learning. For example, feed-forward neural networks contain much more nonlinearities than convolutional neural networks but the performance of feed-forward neural networks is superior than that of convolutional neural networks in image classification. Furthermore, Wu et al. (2019b) shows that simply adding nonlinearities by stacking more Transformer layers does not work well in practice.\nTo evaluate the effectiveness of our network, we further conducted experiments to show the ODEinspired way of adding nonlinearities is better than following heuristic methods:\n• Att-FFN-Att: In Macaron Net, we use two FFN sub-layers and one attention sub-layer in a layer. Here we construct a baseline that uses two attention sub-layers and one FFN sub-layer layer. Note that the attention sub-layer also contains nonlinearities in the softmax and dot-product operations.\n• No residual: In the Transformer, the FFN sub-layer has one hidden layer. We changed it to two hidden layers without residual connections, which increases the nonlinearities.\nWe compare these two models with Macaron Net on WMT14 En-De task and keep all the model parameters to be the same as the Transformer big setting. We list the performance of different models in Table 3. We can see that the BLEU scores of two models are 28.6/28.3 respectively. As a comparison, the BLEU score of Macaron Net is 30.2. We also cite the results from Wu et al. (2019b), which shows that simply stacking more Transformer layers cannot reach comparable performance. Therefore, we believe our ODE-inspired architecture design is principled and better than heuristics, which provides a better understanding to the Transformer model.\nFurthermore, we can also see that the empirical results are consistent with the ODE theories. The step size γ in ODE relates to #layers in deep learning. In detail, a small step size γ maps to a large value of #layer: a neural network with more layers stacked corresponds to an ODE solver with smaller step size γ. In Table 1 and 2, we can see that that given the same step size γ (#layers), our Macaron Net is better than the Transformer. Results in Wu et al. (2019b) show that even using a smaller γ (8-layer or 10-layer Transformer), our Macaron Net (6-layer) is still better. These results are consistent with ODE theories: (1) A higher-order ODE solver is better given the same step size (6-layer Macaron Net v.s. 6-layer Transformer). (2) The order matters, a higher-order ODE solver works well even using a relatively larger step size (6-layer Macaron Net v.s. 10-layer Transformer)." 
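The order claims of Theorem 1 can also be checked numerically. The following sketch (our own, using a toy scalar system whose sub-flows have closed forms; the constants a, b, x0 are arbitrary choices) measures the one-step error of both schemes against the exact solution: halving gamma shrinks the Lie-Trotter error by roughly 4x (O(gamma^2)) and the Strang-Marchuk error by roughly 8x (O(gamma^3)).

```python
import numpy as np

a, b, x0 = 1.0, 0.5, 0.1   # arbitrary toy constants

# Exact sub-flows of dx/dt = a*x (our F) and dx/dt = b*x**2 (our G).
S_F = lambda x, g: x * np.exp(a * g)
S_G = lambda x, g: x / (1.0 - b * g * x)

def exact(x, g):
    """Closed-form flow of the full system dx/dt = a*x + b*x**2."""
    u = (1.0 / x + b / a) * np.exp(-a * g) - b / a
    return 1.0 / u

for gamma in [0.1, 0.05, 0.025, 0.0125]:
    lie = S_G(S_F(x0, gamma), gamma)                          # Eqns (6-7)
    strang = S_G(S_F(S_G(x0, gamma / 2), gamma), gamma / 2)   # Eqns (14-16)
    print(gamma,
          abs(exact(x0, gamma) - lie),      # shrinks ~4x per halving: O(gamma^2)
          abs(exact(x0, gamma) - strang))   # shrinks ~8x per halving: O(gamma^3)
```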
}, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this paper, we interpret the Transformer as a numerical ODE solver for a convection-diffusion equation in a multi-particle dynamic system. Specifically, how words in a sentence are abstracted into contexts by passing through the layers of the Transformer can be interpreted as approximating multiple particles’ movement in the space using the Lie-Trotter splitting scheme and the Euler’s method. By replacing the Lie-Trotter splitting scheme with the more advanced Strang-Marchuk\nsplitting scheme, we obtain a new architecture. The improvements in real applications show the effectiveness of the new model and are consistent with the ODE theory. In the future, we will explore deeper connections between the ODE theory and the Transformer models and use the ODE theory to improve the individual components in the Transformer architecture such as attention modules." }, { "heading": "A PROOF OF THE THEOREM", "text": "Theorem 2. (Bobylev & Ohwada, 2001) We denote the true solution of equation dxdt = F (x), x(0) = y0 4 at time t as x(t) = StF (y0). Simliarily, we can define S t G and S t F+G. The local truncation error at t = 0 of the Lie-Trotter splitting scheme is SγF+G(y0)− S γ G[S γ F (y0)] = γ2 2 {F ′(y0)(y0)G(y0)− G′(y0)(y0)F (y0)}+O(γ3) which is second-order. The local truncation error at t = 0 of the StrangMarchuk splitting scheme Sγ/2G {S γ F [S γ/2 G (y0)]} is S γ F+G(y0)−S γ/2 G {S γ F [S γ/2 G (y0)]} = O(γ3) which is third-order.\nProof. Since\nSγF (y0) = y0 + γF (y0) + γ2\n2 F ′(y0)F (y0) +O(γ 3),\nSγG(y0) = y0 + γG(y0) + γ2\n2 G′(y0)G(y0) +O(γ 3),\nwe have\nSγG[S γ F (y0)] = S γ F (y0) + γG(S γ F (y0)) +\nγ2\n2 G′(SγF (y0))G(S γ F (y0)) +O(γ 3).\nAt the same time, we have\nG(y0 + γF (y0) +O(γ 2)) = G(y0) + γG ′(y0)F (y0) +O(γ 2),\nG′(SγF (y0))G(S γ F (y0)) = G ′(y0)G(y0) +O(γ).\nCombine the estimations we have\nSγG[S γ F (y0)] = y0 + γ[F (y0) +G(y0)]\n+ γ2\n2 [G′(y0)G(y0) + F ′(y0)F (y0) + 2G ′(y0)F (y0)] +O(γ 3).\nAs a result, we can estimate the local truncation error of Lie-Trotter splitting scheme as\nSγF+G(y0)− S γ G[S γ F (y0)]\n= y0 + γ(F (y0) +G(y0)) + γ2\n2 (F ′(y0) +G ′(y0))(F (y0) +G(y0)) +O(γ 3)\n− (y0 + γ[F (y0) +G(y0)] + γ2\n2 [G′(y0)G(y0) + F ′(y0)F (y0) + 2G ′(y0)F (y0)] +O(γ 3))\n= γ2\n2 {F ′(y0)G(y0)−G′(y0)F (y0)}+O(γ3).\nTo estimate the Strang-Marchuk splitting scheme’s local truncation error, we rewrite the StrangMarchuk splitting scheme as\nS γ/2 G {S γ F [S γ/2 G (y0)]} = S γ/2 G {S γ/2 F {S γ/2 F [S γ/2 G (y0)]}}.\nFrom the previous estimation of Lie–Trotter splitting scheme we have\nS γ/2 F+G(y0)− S γ G[S γ F (y0)] =\nγ2\n8 {F ′(y0)G(y0)−G′(y0)F (y0)}+O(γ3),\nS γ/2 F+G(y0)− S γ F [S γ G(y0)] =\nγ2\n8 {G′(y0)F (y0)− F ′(y0)G(y0)}+O(γ3).\n4Since a time-dependent ODE can be formulated as a time-independent ODE by introducing an auxiliary variable (Chicone, 2007), the theorem here developed for time-independent ODEs can also be applied to time-dependent ODEs without loss of generality.\nCombine the two estimations, we have\nS γ/2 G {S γ/2 F {S γ/2 F [S γ/2 G (y0)]}} = S γ F+G(y0) +\nγ2\n8 {F ′(y0)G(y0)−G′(y0)F (y0)}\n+ γ2\n8 {G′(y0)F (y0)− F ′(y0)G(y0)}+O(γ3)\n= SγF+G(y0) +O(γ 3)." }, { "heading": "B EXPERIMENT SETTINGS", "text": "B.1 MACHINE TRANSLATION\nDataset The training/validation/test sets of the IWSLT14 dataset contain about 153K/7K/7K sentence pairs, respectively. We use a vocabulary of 10K tokens based on a joint source and target byte pair encoding (BPE) (Sennrich et al., 2016). 
For WMT14 dataset, we replicate the setup of Vaswani et al. (2017), which contains 4.5M training parallel sentence pairs. Newstest2014 is used as the test set, and Newstest2013 is used as the validation set. The 37K vocabulary for WMT14 is based on a joint source and target BPE factorization.\nModel For the WMT14 dataset, the basic configurations of the Transformer architecture are the base and the big settings (Vaswani et al., 2017). Both of them consist of a 6-layer encoder and 6-layer decoder. The size of the hidden nodes and embeddings are set to 512 for base and 1024 for big. The number of heads are 8 for base and 16 for big. Since the IWSLT14 dataset is much smaller than the WMT14 dataset, the small setting is usually used, whose size of hidden states and embeddings is set to 512 and the number of heads is set to 4. For all settings, the dimensionality of the inner-layer of the position-wise FFN is four times of the dimensionality of the hidden states.\nFor each setting (base, big and small), we replace all Transformer layers by the Macaron layers and obtain the base, big and small Macaron, each of which contains two position-wise feedforward sub-layers in a layer. The translation model is based on the encoder-decoder framework. In the Transformer, the decoder layer has a third sub-layer which performs multi-head attention over the output of the encoder stack (encoder-decoder-attention) and a mask to prevent positions from attending to subsequent positions. In our implementation of Macaron decoder, we also use masks and split the FFN into two sub-layers and thus our decoder layer is (FFN, self-attention, encoder-decoder-attention and FFN).\nTo make a fair comparison, we set the dimensionality of the inner-layer of the two FFN sub-layers in the Macaron layers to two times of the dimensionality of the hidden states. By doing this, the base, big and small Macaron have the same number of parameters as the base, big and small Transformer respectively.\nOptimizer and training We use the Adam optimizer and follow the optimizer setting and learning rate schedule in Vaswani et al. (2017). For the big setting, we enlarge the batch size and learning rate as suggested in Ott et al. (2018) to accelerate training. We employ label smoothing of value ls = 0.1 (Szegedy et al., 2016) in all experiments. Models for WMT14/IWSLT14 are trained on 4/1 NVIDIA P40 GPUs respectively. Our code is based on the open-sourced fairseq (Gehring et al., 2017) code base in PyTorch toolkit.\nEvaluation We use BLEU5 (Papineni et al., 2002) as the evaluation measure for machine translation. Following common practice, we use tokenized case-sensitive BLEU and case-insensitive BLEU for WMT14 En-De and IWSLT14 De-En respectively. During inference, we use beam search with beam size 4 and length penalty 0.6 for WMT14, and beam size 5 and length penalty 1.0 for IWSLT14, following Vaswani et al. (2017).\n5https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/ multi-bleu.perl\nB.2 UNSUPERVISED PRETRAINING\nPre-training dataset We follow Devlin et al. (2018) to use English Wikipedia corpus and BookCorpus for pre-training. As the dataset BookCorpus (Zhu et al., 2015) is no longer freely distributed. We follow the suggestions from Devlin et al. (2018) to crawl and collect BookCorpus6 on our own. The concatenation of two datasets includes roughly 3.4B words in total, which is comparable with the data corpus used in Devlin et al. (2018). 
We first segment documents into sentences with Spacy;7 Then, we normalize, lower-case, and tokenize texts using Moses (Koehn et al., 2007) and apply BPE (Sennrich et al., 2016). We randomly split documents into one training set and one validation set. The training-validation ratio for pre-training is 199:1.\nModel We compare our proposed Macaron Net with the base setting from the original paper (Devlin et al., 2018), which consists of 12 Transformer layers. The size of hidden states and embeddings are set to 768, and the number of attention heads is set to 12. Similarly, we replace the Transformer layers in BERT base by the Macaron layers and reduce the dimensionality of the inner-layer of the two FFN sub-layers by half, and thus we keep the number of parameters of our Macaron base as the same as BERT base.\nOptimizer and training We follow Devlin et al. (2018) to use two tasks to pretrain our model. One task is masked language modeling, which masks some percentage of the input tokens at random, and then requires the model to predict those masked tokens. Another task is next sentence prediction, which requires the model to predict whether two sentences in a given sentence pair are consecutive. We use the Adam optimizer and follow the optimizer setting and learning rate schedule in Devlin et al. (2018) and trained the model on 4 NVIDIA P40 GPUs.\nB.3 GLUE DATASET\nWe provide a brief description of the tasks in the GLUE benchmark (Wang et al., 2019) and our fine-tuning process on the GLUE datasets.\nCoLA The Corpus of Linguistic Acceptability (Warstadt et al., 2018) consists of English acceptability judgments drawn from books and journal articles on linguistic theory. The task is to predict whether an example is a grammatical English sentence. The performance is evaluated by Matthews correlation coefficient (Matthews, 1975).\nSST-2 The Stanford Sentiment Treebank (Socher et al., 2013) consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence (positive/negative). The performance is evaluated by the test accuracy.\nMRPC The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent, and the task is to predict the equivalence. The performance is evaluated by both the test accuracy and the test F1.\nSTS-B The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5; the task is to predict these scores. The performance is evaluated by Pearson and Spearman correlation coefficients.\nQQP The Quora Question Pairs8 (Chen et al., 2018b) dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent. The performance is evaluated by both the test accuracy and the test F1.\n6https://www.smashwords.com/ 7https://spacy.io 8https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs\nMNLI The Multi-Genre Natural Language Inference Corpus (Williams et al., 2018) is a crowdsourced collection of sentence pairs with textual entailment annotations. 
Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment ), contradicts the hypothesis (contradiction), or neither (neutral). The performance is evaluated by the test accuracy on both matched (in-domain) and mismatched (cross-domain) sections of the test data.\nQNLI The Question-answering NLI dataset is converted from the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) to a classification task. The performance is evaluated by the test accuracy.\nRTE The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges (Bentivogli et al., 2009). The task is to predict whether sentences in a sentence pair are entailment. The performance is evaluated by the test accuracy.\nWNLI The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. We follow Devlin et al. (2018) to skip this task in our experiments because few previous works do better than predicting the majority class for this task.\nFine-tuning on GLUE tasks To fine-tune the models, following Devlin et al. (2018), we search the optimization hyperparameters in a search space including different batch sizes (16/32), learning rates (5e-3/3e-5), number of epochs (3/4/5), and a set of different random seeds. We select the model for testing according to their performance on the development set.\nTest data Note that the GLUE dataset distribution does not include the Test labels, and we only made a single GLUE evaluation server submission9 for each of our models.\n9https://gluebenchmark.com" } ]
2019
null
SP:fc98effb95b87ad325f609c31b336c7dafd9ac30
[ "This paper proposes a novel deep reinforcement learning algorithm at the intersection of model-based and model-free reinforcement learning: Risk Averse Value Expansion (RAVE). Overall, this work represents a significant but incremental step forwards for this \"hybrid\"-RL class of algorithms. However, the paper itself has significant weaknesses in its writing, analysis, and presentation of ideas.", "This paper expands on previous work on hybrid model-based and model-free reinforcement learning. Specifically, it expands on the ideas in Model-based Value Expansion (MVE) and Stochastic Ensemble Value Expansion (STEVE) with a dynamically-scaled variance bias term to increase risk aversion over the course of learning, which the authors call Risk Averse Value Expansion (RAVE). Experimental results indicate notable improvements over their selected model-free and hybrid RL baselines on continuous control tasks in terms of initial learning efficiency (how many environment steps are needed to achieve a particular level of performance), asymptotic performance (how high the performance is given the same large number of environment steps), and avoidance of negative outcomes (how infrequently major negative outcomes are encountered over the course of training)." ]
Model-based Reinforcement Learning (RL) has shown great advantages in sample efficiency, but suffers from poor asymptotic performance and high inference cost. A promising direction is to combine model-based reinforcement learning with model-free reinforcement learning, such as model-based value expansion (MVE). However, previous methods do not take into account the stochastic character of the environment, and thus still suffer from high function approximation errors. As a result, they tend to fall behind the best model-free algorithms in some challenging scenarios. We propose a novel Hybrid-RL method developed from MVE, namely Risk Averse Value Expansion (RAVE). In the proposed method, we use an ensemble of probabilistic models of the environment to generate imaginative rollouts, on top of which we further introduce risk aversion by taking the lower confidence bound of the estimate. Experiments on different environments, including MuJoCo and Roboschool, show that RAVE yields state-of-the-art performance. We also found that it largely prevents catastrophic consequences such as falling down, and thus reduces the variance of the rewards.
[]
[ { "authors": [ "Jacob Buckman", "Danijar Hafner", "George Tucker", "Eugene Brevdo", "Honglak Lee" ], "title": "Sampleefficient reinforcement learning with stochastic ensemble value expansion", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Roberto Calandra", "André Seyfarth", "Jan Peters", "Marc P. Deisenroth" ], "title": "Bayesian optimization for learning gaits under uncertainty", "venue": "Annals of Mathematics and Artificial Intelligence (AMAI),", "year": 2016 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Will Dabney", "Georg Ostrovski", "David Silver", "Rémi Munos" ], "title": "Implicit quantile networks for distributional reinforcement learning", "venue": "arXiv preprint arXiv:1806.06923,", "year": 2018 }, { "authors": [ "Will Dabney", "Mark Rowland", "Marc G Bellemare", "Rémi Munos" ], "title": "Distributional reinforcement learning with quantile regression", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Vladimir Feinberg", "Alvin Wan", "Ion Stoica", "Michael I Jordan", "Joseph E Gonzalez", "Sergey Levine" ], "title": "Model-based value estimation for efficient model-free reinforcement learning", "venue": "arXiv preprint arXiv:1803.00101,", "year": 2018 }, { "authors": [ "Scott Fujimoto", "Herke Van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": null, "year": 2018 }, { "authors": [ "Javier Garcıa", "Fernando Fernández" ], "title": "A comprehensive survey on safe reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Chris Gaskett" ], "title": "Reinforcement learning under circumstances beyond its control", "venue": null, "year": 2003 }, { "authors": [ "Stuart Geman", "Elie Bienenstock", "René Doursat" ], "title": "Neural networks and the bias/variance dilemma", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Shixiang Gu", "Timothy Lillicrap", "Ilya Sutskever", "Sergey Levine" ], "title": "Continuous deep q-learning with model-based acceleration", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Matthias Heger" ], "title": "Consideration of risk in reinforcement learning", "venue": "In Machine Learning Proceedings", "year": 1994 }, { "authors": [ "Gabriel Kalweit", "Joschka Boedecker" ], "title": "Uncertainty-driven imagination for continuous deep reinforcement learning", "venue": "In Conference on Robot Learning,", "year": 2017 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-ensemble trust-region policy optimization", "venue": "arXiv preprint arXiv:1802.10592,", "year": 2018 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", 
"year": 2015 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Philipp Moritz", "Robert Nishihara", "Stephanie Wang", "Alexey Tumanov", "Richard Liaw", "Eric Liang", "Melih Elibol", "Zongheng Yang", "William Paul", "Michael I Jordan" ], "title": "Ray: A distributed framework for emerging {AI} applications", "venue": "In 13th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI}", "year": 2018 }, { "authors": [ "Junhyuk Oh", "Satinder Singh", "Honglak Lee" ], "title": "Value prediction network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped dqn", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Pierre-Yves Oudeyer", "Frederic Kaplan" ], "title": "What is intrinsic motivation? a typology of computational approaches", "venue": "Frontiers in neurorobotics,", "year": 2009 }, { "authors": [ "Xinlei Pan", "Daniel Seita", "Yang Gao", "John Canny" ], "title": "Risk averse robust adversarial reinforcement learning", "venue": "arXiv preprint arXiv:1904.00511,", "year": 2019 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Sébastien Racanière", "Théophane Weber", "David Reichert", "Lars Buesing", "Arthur Guez", "Danilo Jimenez Rezende", "Adria Puigdomenech Badia", "Oriol Vinyals", "Nicolas Heess", "Yujia Li" ], "title": "Imagination-augmented agents for deep reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "D Reddy", "Amrita Saha", "Srikanth G Tamilselvam", "Priyanka Agrawal", "Pankaj Dayama" ], "title": "Risk averse reinforcement learning for mixed multi-agent environments", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2019 }, { "authors": [ "Makoto Sato", "Shigenobu Kobayashi" ], "title": "Variance-penalized reinforcement learning for risk-averse asset allocation", "venue": "In International Conference on Intelligent Data Engineering and Automated Learning,", "year": 2000 }, { "authors": [ "Makoto Sato", "Hajime Kimura", "Shibenobu Kobayashi" ], "title": "Td algorithm for the variance of return and mean-variance reinforcement learning", "venue": "Transactions of the Japanese Society for Artificial Intelligence,", "year": 2001 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Formal theory of creativity, fun, and intrinsic motivation (1990–2010)", "venue": "IEEE Transactions on Autonomous Mental Development,", "year": 2010 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian 
Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Elena Smirnova", "Elvis Dohmatob", "Jérémie Mary" ], "title": "Distributionally robust reinforcement learning", "venue": "arXiv preprint arXiv:1902.08708,", "year": 2019 }, { "authors": [ "Richard S Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM SIGART Bulletin,", "year": 1991 }, { "authors": [ "Sebastian Thrun", "Anton Schwartz" ], "title": "Issues in using function approximation for reinforcement learning", "venue": "In Proceedings of the 1993 Connectionist Models Summer School Hillsdale, NJ. Lawrence Erlbaum,", "year": 1993 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Wu", "Dani Yogatama", "Julia Cohen", "Katrina McKinney", "Oliver Smith", "Tom Schaul", "Timothy Lillicrap", "Chris Apps", "Koray Kavukcuoglu", "Demis Hassabis", "David Silver" ], "title": "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. https://deepmind.com/blog/ alphastar-mastering-real-time-strategy-game-starcraft-ii/, 2019", "venue": null, "year": 2019 }, { "authors": [ "Théophane Weber", "Sébastien Racanière", "David P Reichert", "Lars Buesing", "Arthur Guez", "Danilo Jimenez Rezende", "Adria Puigdomenech Badia", "Oriol Vinyals", "Nicolas Heess", "Yujia Li" ], "title": "Imagination-augmented agents for deep reinforcement learning", "venue": "arXiv preprint arXiv:1707.06203,", "year": 2017 }, { "authors": [ "Ray(Moritz" ], "title": "2018) to transfer data and parameters between the actors and the learner. We have 8 actors generating data, and deploy the learner on the GPUs. DDPG uses a GPU, and model-based methods uses two: one for the training of the policy and another for the dynamics model. Figure 6: An illustration of the rollout result with N", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In contrast to the tremendous progress made by model-free reinforcement learning algorithms in the domain of games(Mnih et al., 2015; Silver et al., 2017; Vinyals et al., 2019), poor sample efficiency has risen up as a great challenge to RL, especially when interacting with the real world. Toward this challenge, a promising direction is to integrate the dynamics model to enhance the sample efficiency of the learning process(Sutton, 1991; Calandra et al., 2016; Kalweit & Boedecker, 2017; Oh et al., 2017; Racanière et al., 2017). However, classic model-based reinforcement learning(MBRL) methods tend to lag behind the model-free methods(MFRL) asymptotically, especially in cases of noisy environments and long trajectories. The hybrid combination of MFRL and MBRL(Hybrid-RL for short) has attracted much attention due to this reason. A lot of efforts has been devoted to this field, including the dyna algorithm(Kurutach et al., 2018), model-based value expansion(Feinberg et al., 2018), I2A(Weber et al., 2017), etc.\nThe robustness of the learned policy is another concern in RL. For stochastic environments, policy can be vulnerable to tiny disturbance and occasionally drop into catastrophic consequences. In MFRL, off-policy RL(such as DQN, DDPG) typically suffers from such problems, which in the end leads to instability in the performance including sudden drop in the rewards. To solve such problem, risk sensitive MFRL not only maximize the expected return, but also try to reduce those catastrophic outcomes(Garcıa & Fernández, 2015; Dabney et al., 2018a; Pan et al., 2019). For MBRL and Hybrid-RL, without modeling the uncertainty in the environment(especially for continuous states and actions), it often leads to higher function approximation errors and poorer performances. It is proposed that complete modeling of uncertainty in transition can obviously improve the performance(Chua et al., 2018), however, reducing risks in MBRL and Hybrid-RL has not been sufficiently studied yet.\nIn order to achieve both sample efficiency and robustness at the same time, we propose a new Hybrid-RL method more capable of solving stochastic and risky environments. The proposed method, namely Risk Averse Value Expansion(RAVE), is an extension of the model-based value expansion(MVE)(Feinberg et al., 2018) and stochastic ensemble value expansion(STEVE)(Buckman\net al., 2018). We systematically analyse the approximation errors of different methods in stochastic environments. We borrow ideas from the uncertainty modeling( Chua et al. (2018)) and risk averse reinforcement learning. The probabilistic ensemble environment model is used, which captures not only the variance in estimation(also called epistemic uncertainty), but also stochastic transition nature of the environment(also called aleatoric uncertainty). Utilizing the ensemble of estimations, we further adopt a dynamic confidence lower bound of the target value function to make the policy more risk-sensitive. We compare RAVE with prior MFRL and Hybrid-RL baselines, showing that RAVE not only yields SOTA expected performance, but also facilitates the robustness of the policy." }, { "heading": "2 RELATED WORKS", "text": "The model-based value expansion(MVE)(Feinberg et al., 2018) is a Hybrid-RL algorithm. Unlike typical MFRL such as DQN that uses only 1 step bootstrapping, MVE uses the imagination rollouts of length H to predict the target value. 
The environment model can greatly improve sample efficiency early in training, but the precision of long-term inference becomes the limiting factor asymptotically. In order to properly balance the contributions of value expansions with different horizons, stochastic ensemble value expansion (STEVE) (Buckman et al., 2018) adopts an interpolation over the value expansions of different horizons. The accuracy of each expansion is estimated through an ensemble of environment models as well as value functions. An ensemble of environment models captures uncertainty to some extent; however, an ensemble of deterministic models captures mainly epistemic uncertainty rather than the stochasticity of transitions (Chua et al., 2018).
Uncertainty, or function approximation error, is typically divided into three classes (Geman et al., 1992). The noise is inherent in the environment itself, e.g., stochastic transitions, and is also called aleatoric uncertainty (Chua et al., 2018). The model bias is the error produced by the limited expressive power of the approximating function, measured by the gap between the ground truth and the model's expected prediction given infinite training data. The variance is the uncertainty caused by insufficient training data, also called epistemic uncertainty. Dabney et al. (2018b) discuss epistemic and aleatoric uncertainty and focus on the latter to improve distributional RL. Recent work suggests that an ensemble of probabilistic models (PE) provides a more thorough model of uncertainty (Chua et al., 2018), while simply aggregating deterministic models captures only the variance, i.e., the epistemic uncertainty. Stochastic transitions are more related to the noise (aleatoric uncertainty), while epistemic uncertainty is usually of interest for exploitation and exploration (Pathak et al., 2017; Schmidhuber, 2010; Oudeyer & Kaplan, 2009). Other works adopt ensembles of deterministic value functions for exploration (Osband et al., 2016; Buckman et al., 2018).
Risks in RL typically refer to the inherent uncertainty of the environment and the fact that a policy may perform poorly in some cases (García & Fernández, 2015). Risk-sensitive learning requires not only maximizing expected rewards, but also achieving lower variance and fewer risky outcomes. Toward this objective, some works consider the variance of the return (Sato et al., 2001; Pan et al., 2019; Reddy et al., 2019) or the worst-case outcome (Heger, 1994; Gaskett, 2003), in policy learning (Pan et al., 2019; Reddy et al., 2019), exploration (Smirnova et al., 2019), or distributional value estimates (Dabney et al., 2018a). An interesting issue is that risk reduction typically conflicts with exploration and exploitation, which try to maximize reward in the long run. The authors of (Pan et al., 2019) introduce two adversarial agents (risk-averse and long-term reward-seeking) that act in combination to solve this problem. Still, trading off between risk sensitivity and risk-seeking (exploration) in RL remains tricky and largely empirical. In this paper, we propose a dynamic confidence bound for this purpose.
A number of prior works have studied the function approximation errors that lead to overestimation and sub-optimal solutions in MFRL. Double DQN (DDQN) (Van Hasselt et al., 2016) improves over DQN by disentangling the target value function from the target policy that pursues the maximum value.
In TD3 (Fujimoto et al., 2018), the authors suggest that systematic overestimation of the value function also exists in actor-critic MFRL. They use an ensemble of two value functions, with the minimum estimate used as the target value. Selecting the lower value estimate is similar to using an uncertainty-based lower confidence bound, as adopted by other risk-sensitive methods (Pan et al., 2019), though with a different stated motivation." }, { "heading": "3 PRELIMINARIES", "text": "" }, { "heading": "3.1 ACTOR-CRITIC MODEL-FREE REINFORCEMENT LEARNING", "text": "A Markov Decision Process (MDP) describes the process of an agent interacting with the environment. The agent selects an action $a_t \in \mathcal{A}$ at each time step $t$. After executing the action, it receives a new observation $s_{t+1} \in \mathcal{S}$ and a feedback $r_t \in \mathbb{R}$ from the environment. As we focus mainly on environments with continuous actions, we denote the parametric deterministic policy that the agent uses to decide its action as $a_t = \mu_\theta(s_t)$. Typically we add Gaussian exploration noise on top of the deterministic policy, giving a stochastic behavioral policy $\pi_{\theta,\sigma}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, calculated as $\pi_{\theta,\sigma}(s_t, a_t) = p_{\mathcal{N}}(a_t \mid \mu_\theta(s_t), \sigma^2)$, where $p_{\mathcal{N}}(x \mid m, \sigma^2)$ denotes the probability density at $x$ under the Gaussian distribution $\mathcal{N}(m, \sigma^2)$. As the interaction process continues, the agent generates a trajectory $\tau = (s_0, a_0, r_0, s_1, a_1, r_1, \ldots)$ following the policy $\pi_{\theta,\sigma}$. For finite-horizon MDPs, we use the indicator $d: \mathcal{S} \to \{0, 1\}$ to mark whether the episode has terminated. The objective of RL is to find the optimal policy $\pi^*$ that maximizes the expected discounted sum of rewards along the trajectory. The value of performing action $a$ with policy $\pi$ at state $s$ is defined by $Q^\pi(s, a) = \mathbb{E}_{s_0 = s, a_0 = a, \tau \sim \pi} \sum_{t=0}^{\infty} \gamma^t r_t$, where $0 < \gamma < 1$ is the discount factor. Value iteration in model-free RL approximates the optimal value $Q^{\pi^*}$ with a parametric value function $\hat{Q}_\phi$ by minimizing the Temporal Difference (TD) error, where $\phi$ is the parameter to be optimized. The TD error between the Q-value estimates and the corresponding target values is shown in equation 1, where $\phi'$ is a delayed copy of the parameter $\phi$, and $a' \sim \pi_{\theta'}$ with $\theta'$ a delayed copy of $\theta$ (Lillicrap et al., 2015).
$$\mathcal{L}_\phi = \mathbb{E}_\tau \Big[ \sum_t \big( \hat{Q}_{\mathrm{target}}(r_t, s_{t+1}) - \hat{Q}_\phi(s_t, a_t) \big)^2 \Big], \quad \text{with } \hat{Q}_{\mathrm{target}}(r_t, s_{t+1}) = r_t + \gamma \cdot (1 - d(s_{t+1})) \cdot \hat{Q}_{\phi'}(s_{t+1}, a') \quad (1)$$
To optimize a deterministic policy in a continuous action space, deep deterministic policy gradient (DDPG) (Lillicrap et al., 2015) maximizes the value function (or minimizes the negative value function) under the policy $\mu_\theta$ with respect to the parameter $\theta$, as shown in equation 2.
$$\mathcal{L}_\theta = -\mathbb{E}_\tau \Big[ \sum_t \hat{Q}_{\phi'}(s_t, \mu_\theta(s_t)) \Big] \quad (2)$$" }, { "heading": "3.2 ENVIRONMENT MODELING", "text": "To model an environment with continuous spaces, an environment model is typically composed of three individual mapping functions: $\hat{f}_{r,\zeta_r}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$, $\hat{f}_{s,\zeta_s}: \mathcal{S} \times \mathcal{A} \to \mathcal{S}$, and $\hat{f}_{d,\zeta_d}: \mathcal{S} \to [0, 1]$, which approximate the reward, the next state, and the probability of termination, respectively (Gu et al., 2016; Feinberg et al., 2018). Here $\zeta_r$, $\zeta_s$ and $\zeta_d$ denote the parameters of the corresponding mapping functions.
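As a concrete reference, the following is a minimal sketch of these three mapping functions as small feed-forward networks; the layer sizes and single-hidden-layer architectures are illustrative assumptions rather than the configuration used in the paper (which is given in Appendix A.1).

```python
import torch
import torch.nn as nn

class TransitionModel(nn.Module):
    """f_s: (s, a) -> next state (deterministic variant; Section 3.3 adds a variance head)."""
    def __init__(self, state_dim, action_dim, hidden=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, state_dim))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class RewardModel(nn.Module):
    """f_r: (s, a, s') -> scalar reward."""
    def __init__(self, state_dim, action_dim, hidden=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * state_dim + action_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, s, a, s_next):
        return self.net(torch.cat([s, a, s_next], dim=-1)).squeeze(-1)

class TerminationModel(nn.Module):
    """f_d: s' -> probability that the episode terminates."""
    def __init__(self, state_dim, hidden=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())
    def forward(self, s_next):
        return self.net(s_next).squeeze(-1)
```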
With the environment model, starting from $s_t, a_t$, we can predict the next state and reward by
$$\hat{s}_{t+1} = \hat{f}_{s,\zeta_s}(s_t, a_t), \quad \hat{r}_t = \hat{f}_{r,\zeta_r}(s_t, a_t, \hat{s}_{t+1}), \quad \hat{d}_{t+1} = \hat{f}_{d,\zeta_d}(\hat{s}_{t+1}), \quad (3)$$
and this process may continue to generate a complete imagined trajectory $[s_t, a_t, \hat{r}_t, \hat{s}_{t+1}, \ldots]$.
Neural networks are commonly used as environment models due to their expressive power. To optimize the parameters $\zeta$, we minimize the mean squared error (or the cross entropy) between the predictions and the ground truth, given trajectories $\tau$ under the behavioral policy." }, { "heading": "3.3 UNCERTAINTY AWARE PREDICTION", "text": "A deterministic model approximates only the expectation. As mentioned in the previous sections, overestimation is attributed to function approximation error. Following Chua et al. (2018), we briefly review different uncertainty modeling techniques.
Probabilistic models output a distribution (e.g., the mean and variance of a Gaussian) instead of a point estimate. Taking the reward component of the environment model as an example, the probabilistic model is written as $r \sim \mathcal{N}(\hat{f}_{r,\zeta_r}, \hat{\sigma}^2_{r,\zeta_r})$, and the loss function is the negative log likelihood (equation 4).
$$\mathcal{L}_{\zeta_r} = -\mathbb{E}_\tau \Big[ \log p_{\mathcal{N}}\big(r_t \mid \hat{f}_{r,\zeta_r}(s_t, a_t, s_{t+1}), \hat{\sigma}^2_{r,\zeta_r}(s_t, a_t, s_{t+1})\big) \Big] \quad (4)$$
An ensemble of deterministic (DE) models maintains an ensemble of parameters, typically trained with different training samples and different initializations. E.g., given the ensemble of parameters $\zeta_{r,1}, \zeta_{r,2}, \ldots, \zeta_{r,N}$, the expectation and variance of the prediction are obtained from equation 5:
$$\hat{\mathbb{E}}_{\zeta_r}\big[\hat{f}_{r,\zeta_r}\big] = \frac{1}{N} \sum_i \hat{f}_{r,\zeta_{r,i}}, \qquad \hat{\mathbb{V}}_{\zeta_r}\big[\hat{f}_{r,\zeta_r}\big] = \frac{1}{N} \sum_i \big(\hat{f}_{r,\zeta_{r,i}} - \hat{\mathbb{E}}_{\zeta_r}[\hat{f}_{r,\zeta_r}]\big)^2 \quad (5)$$
We define the operators $\hat{\mathbb{E}}_x$ and $\hat{\mathbb{V}}_x$ as the average and the variance over the ensemble indexed by $x$, respectively. As proposed by (Chua et al., 2018), the variance $\hat{\sigma}^2$ in equation 4 mainly captures the aleatoric uncertainty, and the ensemble variance $\hat{\mathbb{V}}$ mainly captures the epistemic uncertainty.
An ensemble of probabilistic models (PE) keeps track of a collection of distributions $\{\mathcal{N}(\hat{f}_{r,\zeta_{r,i}}, \hat{\sigma}^2_{r,\zeta_{r,i}})\}$, $i \in [1, N]$, which can further estimate both kinds of uncertainty. Sampling from a PE goes as follows:
$$\text{Sample } i \text{ uniformly from } [1, N], \text{ then sample } r \sim \mathcal{N}(\hat{f}_{r,\zeta_{r,i}}, \hat{\sigma}^2_{r,\zeta_{r,i}}) \quad (6)$$" }, { "heading": "3.4 MODEL-BASED VALUE EXPANSION", "text": "In MVE (Feinberg et al., 2018), the learned environment model $\hat{f}_{\zeta_r,\zeta_s,\zeta_d} = (\hat{f}_{s,\zeta_s}, \hat{f}_{r,\zeta_r}, \hat{f}_{d,\zeta_d})$, together with the policy $\mu_\theta$, is used to imagine a trajectory starting from state $s_t$ and action $a_t$, represented by $\hat{\tau}_{\zeta_r,\zeta_s,\zeta_d,\theta',H}(r_t, s_{t+1})$. It produces a trajectory up to horizon $H$ ($H \geq 0$): $\hat{\tau}_{\zeta_r,\zeta_s,\zeta_d,\theta',H}(r_t, s_{t+1}) = (r_t, s_{t+1}, \hat{a}_{t+1}, \hat{r}_{t+1}, \hat{s}_{t+2}, \ldots, \hat{s}_{t+H+1}, \hat{a}_{t+H+1})$.
The target value $\hat{Q}_{\mathrm{target}}$ in equation 1 is then replaced with the estimated return $\hat{Q}^{\mathrm{MVE}}_{\zeta_r,\zeta_s,\zeta_d,\theta',\phi',H}$ on the sampled trajectory $\hat{\tau}_{\zeta_r,\zeta_s,\zeta_d,\theta',H}(r_t, s_{t+1})$, expressed in equation 7:
$$\hat{Q}_{\mathrm{target}}(r_t, s_{t+1}) \leftarrow \hat{Q}^{\mathrm{MVE}}_{\zeta_r,\zeta_s,\zeta_d,\theta',\phi',H}(r_t, s_{t+1}) = r_t + \sum_{t'=t+1}^{t+H} \gamma^{t'-t} d_{t,t'} \hat{r}_{t'} + \gamma^{H+1} d_{t,t+H+1} \hat{Q}_{\phi'}(\hat{s}_{t+H+1}, \hat{a}_{t+H+1}),$$
$$\text{with } d_{t,t'} = (1 - d(s_{t+1})) \prod_{k=t+2}^{t'} (1 - \hat{f}_{d,\zeta_d}(\hat{s}_k)) \quad (7)$$" }, { "heading": "3.5 STOCHASTIC ENSEMBLE VALUE EXPANSION", "text": "Selecting a proper horizon $H$ for value expansion is important for achieving high sample efficiency and asymptotic accuracy at the same time. Though increasing $H$ extends the model's foresight, asymptotic accuracy is sacrificed due to the increased reliance on the environment model.
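To make the MVE target of equation 7 concrete before turning to STEVE's interpolation, here is a minimal sketch of the $H$-step imagined rollout; the model components, policy, and value function are assumed to be given callables, and a deterministic model is used for simplicity.

```python
def mve_target(r_t, s_next, f_s, f_r, f_d, policy, q_value, H, gamma=0.99, done=0.0):
    """H-step model-based value expansion target (equation 7).

    f_s(s, a) -> next state, f_r(s, a, s') -> reward, f_d(s') -> termination
    probability, policy(s) -> action, q_value(s, a) -> scalar. H = 0 recovers
    the one-step TD target of equation 1.
    """
    target = r_t
    alive = 1.0 - done                 # running product d_{t,t'} from equation 7
    s = s_next
    for k in range(1, H + 1):
        a = policy(s)
        s2 = f_s(s, a)                 # imagined next state
        target += (gamma ** k) * alive * f_r(s, a, s2)
        alive *= 1.0 - f_d(s2)         # discount by imagined termination probability
        s = s2
    target += (gamma ** (H + 1)) * alive * q_value(s, policy(s))
    return target
```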
In STEVE (Buckman et al., 2018), an interpolation of the value expansions $\hat{Q}^{\mathrm{MVE}}_{\zeta_r,\zeta_s,\zeta_d,\theta',\phi',H}$ over different $H \in [0, H_{\max}]$ is proposed. The weight for each horizon is set to the inverse of its variance, which is calculated by estimating an ensemble of values while switching the combination of parameters in the environment model $\hat{f}$ and the value function $\hat{Q}_{\phi'}$. In our notation, STEVE can be written as equation 8:
$$\hat{Q}^{\mathrm{STEVE}}(r_t, s_{t+1}) = \frac{\sum_{H=0}^{H_{\max}} \omega_H \, \hat{\mathbb{E}}_{\zeta_r,\zeta_s,\zeta_d,\phi'}\big[\hat{Q}^{\mathrm{MVE}}_{\zeta_r,\zeta_s,\zeta_d,\theta',\phi',H}(r_t, s_{t+1})\big]}{\sum_{H=0}^{H_{\max}} \omega_H}, \quad \text{with } \omega_H = \hat{\mathbb{V}}_{\zeta_r,\zeta_s,\zeta_d,\phi'}\big[\hat{Q}^{\mathrm{MVE}}_{\zeta_r,\zeta_s,\zeta_d,\theta',\phi',H}(r_t, s_{t+1})\big]^{-1} \quad (8)$$" }, { "heading": "4 INVESTIGATION OF THE APPROXIMATION ERROR IN STOCHASTIC ENVIRONMENTS", "text": "To thoroughly investigate the impact of aleatoric uncertainty on Hybrid-RL methods, we construct a demonstrative toy environment (fig. 1(a)). The agent starts from $s_0 = 0$ and chooses an action $a_t$ from $\mathcal{A} = [-1, +1]$ at each time step $t$. The transition of the environment is $s_{t+1} = s_t + a_t |a_t| + k \cdot \mathcal{N}(0, 1)$. We compare two settings: $k = 0$, representing deterministic transitions, and $k = 1$, representing stochastic transitions. The episode terminates when $|s| > 5$, at which point the agent receives a final reward. The agent incurs a constant penalty (-100) at each time step to encourage it to reach the terminal state as soon as possible. Note that the stochastic environment requires more steps in expectation to reach $|s| > 5$, so the ground-truth value at the starting point for $k = 1$ (380+) is lower than for $k = 0$ (430+).
We apply the learning methods DDPG, MVE, and STEVE to this environment and plot the evolution of the estimated values at the starting point (see fig. 5).
The results show that in the deterministic environment, the Q-values estimated by all methods converge to the ground truth asymptotically in such a simple environment. However, after adding the noise, previous MFRL and Hybrid-RL methods show various levels of overestimation. The authors of (Feinberg et al., 2018) claimed that value expansion improves the quality of estimated values, but MVE and STEVE actually give even worse predictions than model-free DDPG in the stochastic environment. A potential explanation is that while the overall overestimation comes from the unavoidable imprecision of the estimator (Thrun & Schwartz, 1993; Fujimoto et al., 2018), Hybrid-RL additionally suffers from the approximation error of the dynamics model. With a deterministic environment model, the predicted transitions of the two settings would be identical, because a deterministic dynamics model tends to approximate the expectation of the next state (e.g., $\hat{f}_{s,\zeta_s}(s_t = 0, a_t > 0) = 1.0$, $\hat{f}_{s,\zeta_s}(s_t = 1.0, a_t > 0) = 2.0$). This results in the same value estimates for $k = 0$ and $k = 1$ under both value expansion methods, even though the ground-truth Q-values differ between the two environments. As a result, the deterministic environment model introduces additional approximation error, leading to more severe overestimation." }, { "heading": "5 METHODOLOGY", "text": "" }, { "heading": "5.1 RISK AVERSE VALUE EXPANSION", "text": "We propose two main improvements over MVE and STEVE. Firstly, we apply an ensemble of probabilistic models (PE) so that the environment model captures the uncertainty over possible trajectories.
Secondly, inspired by risk-sensitive RL, we calculate a confidence lower bound of the target value (fig. 1(c)).
Before introducing RAVE, we start with Distributional Value Expansion (DVE). Compared with MVE, which uses a deterministic environment model and value function, DVE uses a probabilistic environment model, and we independently sample the reward and the next state from the probabilistic models (see equation 9 and fig. 2):
$$\tilde{s}_{t+2} \sim \mathcal{N}\big(\hat{f}_{s,\zeta_s}(s_{t+1}, \tilde{a}_{t+1} = \mu_{\theta'}(s_{t+1})), \hat{\sigma}^2_{s,\zeta_s}(s_{t+1}, \tilde{a}_{t+1})\big), \quad \tilde{r}_{t+1} \sim \mathcal{N}\big(\hat{f}_{r,\zeta_r}(s_{t+1}, \tilde{a}_{t+1}, \tilde{s}_{t+2}), \hat{\sigma}^2_{r,\zeta_r}(s_{t+1}, \tilde{a}_{t+1}, \tilde{s}_{t+2})\big), \quad \tilde{d}(\tilde{s}_{t+2}) \sim \mathcal{N}\big(\hat{f}_{d,\zeta_d}(\tilde{s}_{t+2}), \hat{\sigma}^2_{d,\zeta_d}(\tilde{s}_{t+2})\big) \quad (9)$$
We apply the distributional expansion starting from $r_t, s_{t+1}$ to acquire the trajectory $\tilde{\tau}_{\zeta_r,\zeta_s,\zeta_d,\theta',H}(r_t, s_{t+1}) = (r_t, s_{t+1}, \tilde{a}_{t+1}, \tilde{r}_{t+1}, \tilde{s}_{t+2}, \ldots, \tilde{s}_{t+H+1}, \tilde{a}_{t+H+1})$, based on which we write DVE as equation 10:
$$\hat{Q}^{\mathrm{DVE}}_{\zeta_r,\zeta_s,\zeta_d,\theta',\phi',H}(r_t, s_{t+1}) = r_t + \sum_{t'=t+1}^{t+H} \gamma^{t'-t} d_{t,t'} \tilde{r}_{t'} + \gamma^{H+1} d_{t,t+H+1} \hat{Q}_{\phi'}(\tilde{s}_{t+H+1}, \tilde{a}_{t+H+1}), \quad \text{with } d_{t,t'} = (1 - d(s_{t+1})) \prod_{k=t+2}^{t'} (1 - \tilde{d}(\tilde{s}_k)) \quad (10)$$
We then keep track of an ensemble over the parameter combination $\{\zeta_r, \zeta_s, \zeta_d, \phi'\}$. For each parameter group we use an ensemble of $N$ members, giving $4N$ parameter sets in total. We then select a random combination of four integers $\{i, j, k, l\}$, which gives the parameter combination $\{\zeta_{r,i}, \zeta_{s,j}, \zeta_{d,k}, \phi'_l\}$. By switching the combination of integers we obtain an ensemble of DVE estimates. We then compute the mean and variance over this ensemble, and by subtracting a certain proportion ($\alpha$) of the standard deviation, we obtain a lower bound on the DVE estimate. We call this value estimate the $\alpha$-confidence lower bound ($\alpha$-CLB), written as equation 11:
$$\hat{Q}^{\alpha\text{-CLB}}_H(r_t, s_{t+1}) = \hat{\mathbb{E}}_{\zeta_r,\zeta_s,\zeta_d,\phi'}\big[\hat{Q}^{\mathrm{DVE}}_{\zeta_r,\zeta_s,\zeta_d,\theta',\phi',H}(r_t, s_{t+1})\big] - \alpha \sqrt{\hat{\mathbb{V}}_{\zeta_r,\zeta_s,\zeta_d,\phi'}\big[\hat{Q}^{\mathrm{DVE}}_{\zeta_r,\zeta_s,\zeta_d,\theta',\phi',H}(r_t, s_{t+1})\big]} \quad (11)$$
Subtracting a variance-based penalty is commonly used in risk-sensitive RL (Sato & Kobayashi, 2000; Pan et al., 2019; Reddy et al., 2019). The motivation is straightforward: we suppress the utility of high-variance trajectories in order to avoid possible risks. However, the choice of $\alpha$ is left open here; we return to it in the next subsection.
Finally, we define RAVE, which adopts the same interpolation among different horizons as STEVE, now based on DVE and the CLB, as shown in equation 12:
$$\hat{Q}_{\mathrm{target}}(r_t, s_{t+1}) \leftarrow \hat{Q}^{\mathrm{RAVE}}(r_t, s_{t+1}) = \frac{\sum_{H=0}^{H_{\max}} \omega_H \hat{Q}^{\alpha\text{-CLB}}_H(r_t, s_{t+1})}{\sum_{H=0}^{H_{\max}} \omega_H}, \quad \text{with } \omega_H = \hat{\mathbb{V}}_{\zeta_r,\zeta_s,\zeta_d,\phi'}\big[\hat{Q}^{\mathrm{DVE}}_{\zeta_r,\zeta_s,\zeta_d,\theta',\phi',H}(r_t, s_{t+1})\big]^{-1} \quad (12)$$
While adopting the lower confidence bound may introduce an underestimation bias, it makes the policy less inclined toward actions with high variance in their future returns." }, { "heading": "5.2 ADAPTIVE CONFIDENCE BOUND", "text": "A remaining problem in RAVE is selecting a proper $\alpha$. The requirements of risk aversion and exploration are somewhat competing: risk aversion seeks to minimize variance, while exploration searches for states with higher variance. To satisfy both requirements, previous work proposed two competing agents, each making decisions for a short period of time (Pan et al., 2019). Here we propose another solution. We argue that the agent should explore aggressively at the beginning, and become more risk-sensitive as the model converges. A key indicator for this is the epistemic uncertainty, which measures how well the model has come to know the state space.
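(As a concrete reference before continuing, here is a minimal NumPy sketch of the $\alpha$-CLB and the inverse-variance interpolation of equations 11 and 12, assuming the ensemble of DVE estimates for each horizon has already been computed; the stabilizing epsilon is an assumption.)

```python
import numpy as np

def rave_target(dve_estimates, alpha):
    """RAVE target from an ensemble of DVE estimates (equations 11-12).

    dve_estimates: array of shape [H_max + 1, M]; row H holds M ensemble
    estimates of Q^DVE_H, one per sampled parameter combination.
    """
    mean = dve_estimates.mean(axis=1)          # E[Q^DVE_H] over the ensemble
    var = dve_estimates.var(axis=1)            # V[Q^DVE_H] over the ensemble
    clb = mean - alpha * np.sqrt(var)          # alpha-CLB, equation 11
    w = 1.0 / (var + 1e-8)                     # inverse-variance weights (eps for stability)
    return np.sum(w * clb) / np.sum(w)         # equation 12

# Example: 4 horizons (H_max = 3), 8 ensemble members per horizon.
rng = np.random.default_rng(0)
estimates = 100.0 + rng.normal(size=(4, 8))
print(rave_target(estimates, alpha=1.0))
```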
In MBRL and Hybrid-RL, a common technique makes it easy to monitor the epistemic uncertainty: evaluating the ability of the learned environment model to predict the consequences of its own actions (Pathak et al., 2017).
Following this motivation, we make the confidence bound factor depend on the current state and action, denoted $\alpha(s, a)$. We want $\alpha(s, a)$ to be larger when the environment model can predict the next state well, making the policy more risk-sensitive, and smaller when the prediction is noisy, allowing more exploration. We have
$$\alpha(s_t, a_t) = \max\Big\{0, \; \alpha \Big(1.0 - \frac{1}{Z} \big\| \hat{\mathbb{E}}_{\zeta_s}\big[\hat{f}_{s,\zeta_s}(s_t, a_t)\big] - s_{t+1} \big\|^2 \Big)\Big\}, \quad (13)$$
where $Z$ is a scaling factor for the prediction error. With a slight abuse of notation, we use $\alpha$ here to denote a constant hyperparameter, while $\alpha(s, a)$ is the factor actually used in the $\alpha$-CLB. $\alpha(s_t, a_t)$ starts near zero and gradually increases toward $\alpha$ as learning proceeds." }, { "heading": "6 EXPERIMENTS AND ANALYSIS", "text": "We evaluate RAVE on continuous control environments using the MuJoCo physics simulator (Todorov et al., 2012). The baselines include model-free DDPG and STEVE, which currently yields state-of-the-art Hybrid-RL performance on MuJoCo. We also compare against state-of-the-art MFRL methods, including the twin delayed deep deterministic (TD3) policy gradient algorithm (Fujimoto et al., 2018), the soft actor-critic (SAC) algorithm (Haarnoja et al., 2018), and proximal policy optimization (PPO) (Schulman et al., 2017), using the implementations provided by the authors. To further demonstrate robustness in complex environments, we also evaluate RAVE on OpenAI's Roboschool (Klimov & Schulman, 2017), where STEVE has shown a large improvement over the other baselines. We detail the hyper-parameters and the implementation in the supplementary materials." }, { "heading": "6.1 EXPERIMENTAL RESULTS", "text": "We carried out experiments on the eight environments shown in fig. 3. Among the compared methods, PPO is the only on-policy method; it has very poor sample efficiency compared with either STEVE or the off-policy MFRL methods, as PPO needs a large batch size to learn stably (Haarnoja et al., 2018). DDPG achieves quite high performance on HalfCheetah-v1 and Swimmer-v1, but fails on almost all the other environments, especially challenging ones such as Humanoid.
On Hopper-v1 and Walker2d-v1, STEVE cannot compete with TD3 and SAC, which yield quite good performance in many environments. However, RAVE performed favorably in most environments in both asymptotic performance and learning speed (i.e., sample efficiency), except on HalfCheetah-v1 and Swimmer-v1, where DDPG already achieves satisfactory performance and the margin between DDPG and Hybrid-RL is not that large." }, { "heading": "6.2 ANALYSIS", "text": "Distribution of Value Function Approximation. To investigate whether the proposed method predicts the value function more precisely, we plot the predicted values $\hat{Q}$ against the ground-truth values on Hopper-v1 in fig. 4. The ground-truth value here is calculated by directly summing the rewards of the remaining trajectory; it is thus closer to a Monte Carlo sample from the ground-truth distribution, which is quite noisy. To better show the distribution of the points, we draw confidence ellipses representing their density. The points are extracted from the models at 1M environment steps.
In DDPG and STEVE, the predicted values align poorly with the ground truth, while RAVE yields better alignment, albeit with slight underestimation.
Investigation of the dynamic confidence bound. To study the role played by the $\alpha$-confidence lower bound separately, we carried out a series of ablation experiments in the Hopper-v1 environment. We compare RAVE with constant $\alpha$, RAVE with dynamic $\alpha(s, a)$, and the other baselines. For all algorithms, we set $H_{\max} = 3$, and the experiments are replicated 4 times.
From fig. 5(a) we can see that $\alpha = 0$ (i.e., the ensemble of DVE only) already surpasses the performance of STEVE on Hopper-v1, showing that modeling aleatoric uncertainty through PE indeed benefits value expansion. A larger margin is attained by introducing the $\alpha$-CLB. A very large $\alpha$ (such as constant $\alpha = 2.0$, meaning a lower CLB) quickly stabilizes performance, but performance then stays low due to the lack of exploration, while a smaller $\alpha$ (constant $\alpha = 0.0$ or $0.5$) produces larger fluctuations in performance. The dynamic adjustment of $\alpha$ facilitates both fast improvement and stable performance.
Analysis of Robustness. We also investigate the robustness of RAVE and the baselines on the most challenging MuJoCo environment, Humanoid-v1. Humanoid-v1 involves complex humanoid dynamics, where the agent is prone to falling. We evaluate robustness via the probability that the learned policy falls. As shown in fig. 5(b), RAVE achieves the lowest falling rate among the compared methods.
Computational Complexity. A main concern regarding RAVE may be its computational complexity. On the one hand, Hybrid-RL involves an environment model, which introduces additional computational cost. On the other hand, RAVE and STEVE involve ensembles of trajectory rollouts, which are somewhat costly. We keep the ensemble size the same as STEVE; details about the hyper-parameters can be found in the supplements.
For training, the additional cost of RAVE compared with STEVE comes from modeling aleatoric uncertainty and the additional sampling. We tested the training speed of STEVE and RAVE: the time for RAVE to train 500 batches with a batch size of 512 is 13.20s, an increase of 24.29% over STEVE (10.62s). The times reported here were measured on 2 Nvidia P40 GPUs with 8 CPUs (2.00GHz). For inference, RAVE requires exactly the same computational resources as other model-free actor-critic methods, as long as the policy architecture is the same; this is far more cost-efficient than MBRL approaches that adopt a planning procedure.
We also want to emphasize that computational complexity is typically less important than sample efficiency, as the cost of interaction matters more than the cost of computation during training." }, { "heading": "7 CONCLUSION", "text": "In this paper, we raise the problem of incomplete uncertainty modeling and insufficient robustness in model-based value expansion. We introduce an ensemble of probabilistic models to approximate the environment, based on which we introduce distributional value expansion (DVE) and the $\alpha$-confidence lower bound ($\alpha$-CLB), which together lead to RAVE. Our experiments demonstrate the superiority of RAVE in both sample efficiency and robustness compared with state-of-the-art RL methods, including the model-free TD3 algorithm and the Hybrid-RL STEVE algorithm.
We hope that this algorithm will facilitate the application of reinforcement learning to real-world, complex, and risky scenarios." }, { "heading": "A TRAINING AND IMPLEMENTATION DETAILS", "text": "A.1 NEURAL NETWORK STRUCTURE
We used rectified linear units (ReLUs) between all hidden layers of all our implemented algorithms. Unless otherwise stated, the output layers have no activation function. RL Models. We implement the model-based algorithms on top of DDPG, with a policy network and a Q-value network. The policy network is a stack of 4 fully-connected (FC) layers. The activation function of its output layer is tanh, to constrain the output range of the network. The Q-value network takes the concatenation of the state $s_t$ and the action $a_t$ as input, followed by four FC layers.
Dynamics Models. We train three neural networks as the transition function, the reward function, and the termination function. We use eight FC layers for the transition approximator and four FC layers for each of the other approximators. The distributional models $\mathcal{N}(\hat{f}, \hat{\sigma}^2)$ in RAVE use a similar structure, except that there are two output heads corresponding to the mean and the variance, respectively.
A.2 PARALLEL TRAINING
We use distributed training to accelerate our algorithms and the baseline algorithms. Following the implementation of STEVE, we train a GPU learner with multiple actors deployed on a CPU cluster. The actors reload the parameters periodically from the learner, generate trajectories, and send them to the learner. The learner stores them in the replay buffer and updates the parameters with data randomly sampled from the buffer. For network communication, we use Ray (Moritz et al., 2018) to transfer data and parameters between the actors and the learner. We have 8 actors generating data, and deploy the learner on the GPUs. DDPG uses one GPU, and the model-based methods use two: one for training the policy and another for the dynamics model.
A.3 ROLLOUT DETAILS
We employ the same target-candidate computation as STEVE, except that we imagine each rollout with an ensemble of probabilistic models. First, we bind the parameters of the transition model ($\zeta_{s,i}$) to those of the termination model ($\zeta_{d,i}$). That is, we enumerate combinations of three integers $\{i, j, k \mid i, j, k \in [1, N]\}$, which gives an ensemble of $N^3$ parameter combinations $\{\zeta_{r,j}, \zeta_{s,i}, \zeta_{d,i}, \phi'_k\}$. The actual sampling process goes as follows: for each $H \in [0, H_{\max}]$, we first use the transition model ($\zeta_{s,i}$) and the termination model ($\zeta_{d,i}$) to imagine a state-action sequence $\{s_{t+1}, \tilde{a}_{t+1}, \tilde{s}_{t+2}, \tilde{a}_{t+2}, \ldots, \tilde{s}_{t+H+1}, \tilde{a}_{t+H+1}\}$; based on this sequence, we use the reward function ($\zeta_{r,j}$) to estimate the rewards ($\hat{r}_{t'} = \hat{f}_{r,\zeta_{r,j}}(\tilde{s}_{t'}, \tilde{a}_{t'}, \tilde{s}_{t'+1})$) and the value function ($\phi'_k$) to predict the value of the last state ($\hat{Q}_{\phi'_k}(\tilde{s}_{t+H+1}, \tilde{a}_{t+H+1})$) (fig. 6). In total we predict $N^3 (H_{\max} + 1)$ combinations of rewards and value functions, in both RAVE and STEVE." }, { "heading": "B HYPER-PARAMETERS FOR TRAINING", "text": "We list all the hyper-parameters used in our experiments in table 1." } ]
2019
null
SP:bddd3d499426725b02d3d67ca0a7f8ef0c30e639
[ "This paper presents a technique for encoding the high level “style” of pieces of symbolic music. The music is represented as a variant of the MIDI format. The main strategy is to condition a Music Transformer architecture on this global “style embedding”. Additionally, the Music Transformer model is also conditioned on a combination of both “style” and “melody” embeddings to try and generate music “similar” to the conditioning melody but in the style of the performance embedding. ", "In this paper, the author extends the standard music Transformer into a conditional version: two encoders are evolved, one for encoding the performance and the other is used for encoding the melody. The output representation has to be similar to the input. The authors conduct experiments on the MAESTRO dataset and an internal, 10,000+ hour dataset of piano performances to verify the proposed algorithm." ]
We consider the problem of learning high-level controls over the global structure of sequence generation, particularly in the context of symbolic music generation with complex language models. In this work, we present the Transformer autoencoder, which aggregates encodings of the input data across time to obtain a global representation of style from a given performance. We show it is possible to combine this global embedding with other temporally distributed embeddings, enabling improved control over the separate aspects of performance style and melody. Empirically, we demonstrate the effectiveness of our method on a variety of music generation tasks on the MAESTRO dataset and an internal dataset with 10,000+ hours of piano performances, where we achieve improvements in terms of log-likelihood and mean listening scores as compared to relevant baselines.
[ { "affiliations": [], "name": "TRANSFORMER AUTOENCODERS" } ]
[ { "authors": [ "Pierre Baldi" ], "title": "Autoencoders, unsupervised learning, and deep architectures", "venue": "In Proceedings of ICML workshop on unsupervised and transfer learning,", "year": 2012 }, { "authors": [ "Samuel R Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "arXiv preprint arXiv:1511.06349,", "year": 2015 }, { "authors": [ "Jesse Engel", "Matthew Hoffman", "Adam Roberts" ], "title": "Latent constraints: Learning to generate conditionally from unconditional generative models", "venue": "arXiv preprint arXiv:1711.05772,", "year": 2017 }, { "authors": [ "Jesse Engel", "Cinjon Resnick", "Adam Roberts", "Sander Dieleman", "Mohammad Norouzi", "Douglas Eck", "Karen Simonyan" ], "title": "Neural audio synthesis of musical notes with wavenet autoencoders", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jesse Engel", "Kumar Krishna Agrawal", "Shuo Chen", "Ishaan Gulrajani", "Chris Donahue", "Adam Roberts" ], "title": "Gansynth: Adversarial neural audio synthesis", "venue": null, "year": 1902 }, { "authors": [ "Jon Gillick", "Adam Roberts", "Jesse Engel", "Douglas Eck", "David Bamman" ], "title": "Learning to groove with inverse sequence transformations", "venue": null, "year": 1905 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "David Ha", "Douglas Eck" ], "title": "A neural representation of sketch drawings", "venue": "arXiv preprint arXiv:1704.03477,", "year": 2017 }, { "authors": [ "Curtis Hawthorne", "Andriy Stasyuk", "Adam Roberts", "Ian Simon", "Cheng-Zhi Anna Huang", "Sander Dieleman", "Erich Elsen", "Jesse Engel", "Douglas Eck" ], "title": "Enabling factorized piano music modeling and generation with the MAESTRO dataset", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Cheng-Zhi Anna Huang", "Tim Cooijmans", "Adam Roberts", "Aaron Courville", "Douglas Eck" ], "title": "Counterpoint by convolution", "venue": "arXiv preprint arXiv:1903.07227,", "year": 2019 }, { "authors": [ "Cheng-Zhi Anna Huang", "Ashish Vaswani", "Jakob Uszkoreit", "Ian Simon", "Curtis Hawthorne", "Noam Shazeer", "Andrew M. Dai", "Matthew D. 
Hoffman", "Monica Dinculescu", "Douglas Eck" ], "title": "Music transformer: Generating music with long-term structure", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hsiao-Tzu Hung", "Chung-Yang Wang", "Yi-Hsuan Yang", "Hsin-Min Wang" ], "title": "Improving automatic jazz melody generation by transfer learning techniques", "venue": null, "year": 1908 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Łukasz Kaiser", "Samy Bengio" ], "title": "Discrete autoencoders for sequence models", "venue": "arXiv preprint arXiv:1801.09797,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Nicholas Meade", "Nicholas Barreyre", "Scott C Lowe", "Sageev Oore" ], "title": "Exploring conditioning for generative music systems with human-interpretable controls", "venue": null, "year": 1907 }, { "authors": [ "Noam Mor", "Lior Wolf", "Adam Polyak", "Yaniv Taigman" ], "title": "A universal music translation network", "venue": "arXiv preprint arXiv:1805.07848,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Sageev Oore", "Ian Simon", "Sander Dieleman", "Douglas Eck", "Karen Simonyan" ], "title": "This time with feeling: learning expressive musical performance", "venue": "Neural Computing and Applications, pp", "year": 2018 }, { "authors": [ "Christine Payne" ], "title": "Musenet, 2019. URL https://openai.com/blog/musenet", "venue": null, "year": 2019 }, { "authors": [ "Adam Roberts", "Jesse Engel", "Colin Raffel", "Curtis Hawthorne", "Douglas Eck" ], "title": "A hierarchical latent vector model for learning long-term structure in music", "venue": "arXiv preprint arXiv:1803.05428,", "year": 2018 }, { "authors": [ "Ruslan Salakhutdinov", "Geoffrey Hinton" ], "title": "Deep boltzmann machines", "venue": "In Artificial intelligence and statistics,", "year": 2009 }, { "authors": [ "Peter Shaw", "Jakob Uszkoreit", "Ashish Vaswani" ], "title": "Self-attention with relative position representations", "venue": "arXiv preprint arXiv:1803.02155,", "year": 2018 }, { "authors": [ "Ian Simon", "Cheng-Zhi Anna Huang", "Jesse Engel", "Curtis Hawthorne", "Monica Dinculescu" ], "title": "Generating piano music with transformer. 2019", "venue": "URL https://magenta.tensorflow.org/ piano-transformer", "year": 2019 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Ladder variational autoencoders", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks. 
In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Aaron Van den Oord", "Nal Kalchbrenner", "Lasse Espeholt", "Oriol Vinyals", "Alex Graves" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Extracting and composing robust features with denoising autoencoders", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of machine learning research,", "year": 2010 }, { "authors": [ "Tianming Wang", "Xiaojun Wan" ], "title": "T-cvae: Transformer-based conditioned variational autoencoder for story completion", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "There has been significant progress in generative modeling, particularly with respect to creative applications such as art and music (Oord et al., 2016; Engel et al., 2017b; Ha & Eck, 2017; Huang et al., 2019a; Payne, 2019). As the number of generative applications increase, it becomes increasingly important to consider how users can interact with such systems, particularly when the generative model functions as a tool in their creative process (Engel et al., 2017a; Gillick et al., 2019) To this end, we consider how one can learn high-level controls over the global structure of a generated sample. We focus on symbolic music generation, where Music Transformer (Huang et al., 2019b) is the current state-of-the-art in generating high-quality samples that span over a minute in length.\nThe challenge in controllable sequence generation is the fact that Transformers (Vaswani et al., 2017) and their variants excel as language models or in sequence-to-sequence tasks such as translation, but it is less clear as to how they can: (1) learn and (2) incorporate global conditioning information at inference time. This contrasts with traditional generative models for images such as the variational autoencoder (VAE) (Kingma & Welling, 2013) or generative adversarial network (GAN) (Goodfellow et al., 2014) which typically incorprate global conditioning as part of their training procedure (Sohn et al., 2015; Sønderby et al., 2016; Isola et al., 2017; Van den Oord et al., 2016).\nIn this work, we introduce the Transformer autoencoder, where we aggregate encodings across time to obtain a holistic representation of the performance style. We show that this learned global representation can be incorporated with other forms of structural conditioning in two ways. First, we show that given a performance, our model can generate performances that are similar in style to the provided input. Then, we explore different methods to combine melody and performance representations to harmonize a melody in the style of the given performance. In both cases, we show that combining both global and fine-scale encodings of the musical performance allows us to gain better control of generation, separately manipulating both the style and melody of the resulting sample.\nEmpirically, we evaluate our model on two datasets: the publicly-available MAESTRO (Hawthorne et al., 2019) dataset, and an internal dataset of piano performances transcribed from 10,000+ hours of audio (Anonymous for review). We find that the Transformer autoencoder is able to generate not only performances that sound similar to the input, but also accompaniments of melodies that follow a given style, as shown through both quantitative and qualitative experiments as well as a user listening study. In particular, we demonstrate that our model is capable of adapting to a particular musical style even in the case where we have one single input performance." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 DATA REPRESENTATION FOR MUSIC GENERATION", "text": "The MAESTRO (Hawthorne et al., 2019) dataset consists of over 1,100 classical piano performances, where each piece is represented as a MIDI file. The internal performance dataset consists of over 10,000 hours of piano performances transcribed from audio (Anonymous for review). In both cases, we represent music as a sequence of discrete tokens, effectively formulating the generation task as a language modeling problem. 
The performances are encoded using the vocabulary described in (Oore et al., 2018), which captures expressive dynamics and timing. This performance encoding vocabulary consists of 128 note-on events, 128 note-off events, 100 time-shift events representing time shifts in 10ms increments from 10ms to 1s, and 32 quantized velocity bins representing the velocities at which the 128 note-on events were played." }, { "heading": "2.2 MUSIC TRANSFORMER", "text": "We build our Transformer autoencoder from Music Transformer, a state-of-the-art generative model that is capable of generating music with long-term coherence (Huang et al., 2019b). While the original Transformer uses a self-attention mechanism that operates over absolute positional encodings of each token in a given sequence (Vaswani et al., 2017), Music Transformer replaces this with relative attention (Shaw et al., 2018), which allows the model to better keep track of regularity based on event orderings and periodicity in the performance. Huang et al. (2019b) propose a novel algorithm for implementing relative self-attention that is significantly more memory-efficient, enabling the model to generate musical sequences over a minute in length. For more details regarding the self-attention mechanism and Transformers, we refer the reader to (Vaswani et al., 2017; Parmar et al., 2018)." }, { "heading": "3 CONDITIONAL GENERATION WITH THE TRANSFORMER AUTOENCODER", "text": "" }, { "heading": "3.1 MODEL ARCHITECTURE", "text": "We leverage the standard encoder and decoder stacks of the Transformer as a foundation for our model, with minor modifications that we outline below.
Transformer Encoder: For both the performance and melody encoder networks, we use the Transformer's stack of 6 layers, each comprising: (1) a multi-head relative attention mechanism; and (2) a position-wise fully-connected feed-forward network. The performance encoder takes as input the event-based performance encoding of an input performance, while the melody encoder learns an encoding of the melody that has been extracted from the input performance. Depending on the music generation task, which we elaborate upon in Section 3.2, the encoder output(s) are fed into the Transformer decoder. Figure 1 describes the way in which the encoder and decoder networks are composed together.
Transformer Decoder: The decoder shares the same structure as the encoder network, but with an additional multi-head attention layer over the encoder outputs. At each step of generation, the decoder takes in the output of the encoder, as well as each new token that was previously generated.
The model is trained end-to-end with maximum likelihood. That is, for a given sequence $x$ of length $n$, we maximize $\log p_\theta(x) = \sum_{i=1}^{n} \log p_\theta(x_i \mid x_{<i})$ with respect to the model parameters $\theta$." }, { "heading": "3.2 CONDITIONING MECHANISM", "text": "Performance Conditioning and Bottleneck: For this task, we aim to generate samples that sound “similar” to a conditioning input performance. We incorporate a bottleneck in the output of the Transformer encoder in order to prevent the model from simply memorizing the input (Baldi, 2012). Thus, as shown in Figure 1, we mean-aggregate the performance embedding across the time dimension in order to learn a global representation of style. This mean performance embedding is then fed into the autoregressive decoder, which attends to this global representation in order to predict the appropriate target.
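A minimal sketch of this mean-aggregation bottleneck is shown below; the tensor shapes follow the hyperparameters in Section 5, and treating the pooled vector as a length-1 memory for the decoder's cross-attention is an assumption about the implementation.

```python
import numpy as np

def performance_style_embedding(encoder_outputs):
    """Mean-aggregate encoder activations over time (the bottleneck).

    encoder_outputs: [T, d_model] activations from the performance encoder.
    Returns a single [d_model] vector summarizing the performance's style.
    """
    return encoder_outputs.mean(axis=0)

# The decoder then attends to this single global vector, e.g. by treating it
# as a length-1 "memory" sequence:
enc_out = np.random.randn(2048, 384)                    # T = 2048, d_model = 384
memory = performance_style_embedding(enc_out)[None, :]  # shape [1, 384]
```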
Although this bottleneck may be undesirable in sequence transduction tasks where the input and output sequences differ (e.g., translation), we find that it works well in our setting, where we require the generated samples to be similar in style to the input sequence.
Melody & Performance Conditioning: Next, we synthesize any given melody in the style of a different performance. Although the setup shares similarities with the melody conditioning problem in (Huang et al., 2019b), we note that we also provide a conditioning performance signal, which makes the generation task more challenging. During training, we follow an internal procedure to extract melodies from performances in the training set, quantize each melody to a 100ms grid, and encode it as a sequence of tokens that uses a different vocabulary than the performance representation. We then use two distinct Transformer encoders (each with the same architecture) as in Section 3.1 to separately encode the melody and performance inputs. The melody and performance embeddings are combined to use as input to the decoder.
We explore various ways of combining the intermediate representations: (1) sum, where we add the performance and melody embeddings together; (2) concatenate, where we concatenate the two embeddings separated by a stop token; and (3) tile, where we tile the performance embedding across every dimension of time in the melody encoding. In all three cases, we work with the mean-aggregated representation of the input performance. We find that different approaches work better than others on some datasets, a point which we elaborate upon in Section 5.
Input Perturbation: In order to encourage the encoded performance representations to generalize across various melodies, keys, and tempos, we draw inspiration from the denoising autoencoder (Vincent et al., 2008) as a means to regularize the model. For every target performance from which we extract the input melody, we provide the model with a perturbed version of the input performance as the conditioning signal. We allow this “noisy” performance to vary across two axes of variation: (1) pitch, where we artificially shift the overall pitch down or up by up to 6 semitones; and (2) time, where we stretch the timing of the performance by at most 5%. In our experiments, we find that this augmentation procedure leads to samples that sound more pleasing (Oore et al., 2018). We provide further details on the augmentation procedure in Appendix A." }, { "heading": "4 SIMILARITY EVALUATION ON PERFORMANCE FEATURES", "text": "Although a variety of metrics have been proposed to quantify both the quality (Engel et al., 2019) and the similarity of musical performances relative to one another (Yang & Lerch, 2018; Hung et al., 2019), the development of a proper metric to measure such characteristics in music generation remains an open question. Therefore, we draw inspiration from (Yang & Lerch, 2018) and capture the style of a given performance based on its pitch- and rhythm-related characteristics, using 8 features:
1. Note Density (ND): The note density refers to the average number of notes per second in a performance: a higher note density often indicates a fast-moving piece, while a lower note density correlates with softer, slower pieces. This feature is a good indicator of rhythm.
2. Pitch Range (PR): The pitch range denotes the difference between the highest and lowest semitones (MIDI pitches) in a given phrase.
3.
Mean Pitch (MP) / Variation of Pitch (VP): Similar in vein to the pitch range (PR), the average and overall variation of pitch in a musical performance capture whether the piece is played in a higher or lower octave.
4. Mean Velocity (MV) / Variation of Velocity (VV): The velocity of each note indicates how hard a key is pressed in a musical performance, and serves as a heuristic for overall volume.
5. Mean Duration (MD) / Variation of Duration (VD): The duration describes how long each note is pressed in a performance, representing articulation, dynamics, and phrasing." }, { "heading": "4.1 OVERLAPPING AREA (OA) METRIC", "text": "To best capture the salient features within the periodic structure of a musical performance, we use a sliding window of 2s to construct histograms of the desired feature within each window. We found that representing each performance with such relative measurements better preserves changing dynamics and stylistic motifs across the entire performance, as opposed to a single scalar value (e.g., average note density across the entire performance).
Similar to (Yang & Lerch, 2018; Hung et al., 2019), we smooth the histograms by fitting a Gaussian distribution to each feature; this allows us to learn a compact representation while still capturing the feature's variability through its mean $\mu$ and variance $\sigma^2$. Then, to compare two performances, we compute the Overlapping Area (OA) between the Gaussian pdfs of each feature to quantify their similarity. We demonstrate empirically that this metric identifies the relevant characteristics of interest in our generated performances in Section 5." }, { "heading": "5 EXPERIMENTS", "text": "Datasets: We used both MAESTRO (Hawthorne et al., 2019) and internal datasets (Simon et al., 2019) for the experimental setup. We used the standard 80/10/10 train/validation/test split from MAESTRO v1.0.0, and augmented the dataset by 10x using pitch shifts of no more than a minor third and time stretches of at most 5%. We note that this augmentation is distinct from the noise-injection procedure referenced in Section 3: the data augmentation merely increases the size of the initial dataset, while the perturbation procedure operates only on the input performance signal. The internal dataset did not require any additional augmentation.
Experimental Setup: We implemented the model in the Tensor2Tensor framework (Vaswani et al., 2017), and used the default hyperparameters for training: a 0.2 learning rate with 8000 warmup steps, rsqrt decay, 0.2 dropout, and early stopping for GPU training. For TPU training, we use AdaFactor with rsqrt decay and 10K learning-rate warmup steps. We adopt many of the hyperparameter configurations from (Huang et al., 2019b): we reduce the query and key hidden size to half the hidden size, use 8 hidden layers, use 384 hidden units, and set the maximum relative distance to consider to half the training sequence length for relative global attention. We set the maximum sequence length to 2048 tokens and the filter size to 1024. We provide additional details on the model architectures and hyperparameter configurations in Appendix C." }, { "heading": "5.1 LOG-LIKELIHOOD EVALUATION", "text": "As expected, we find that the Transformer autoencoder framework with the encoder output bottleneck outperforms the other baselines. In Tables 1 and 2, we see that all conditional model variants outperform their unconditional counterparts; the embedding-combination strategies being compared are sketched below.
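A minimal sketch of the three combination strategies from Section 3.2 follows; broadcasting the style vector in sum, standing in a zero vector for the stop token in concatenate, and concatenating features per time step in tile are all interpretation assumptions.

```python
import numpy as np

def combine(perf_mean, melody_enc, how):
    """Combine a mean-aggregated performance embedding with a melody encoding.

    perf_mean:  [d] global style vector from the performance encoder.
    melody_enc: [T_mel, d] per-step outputs from the melody encoder.
    """
    if how == "sum":          # add the style vector to every melody step
        return melody_enc + perf_mean[None, :]
    if how == "concatenate":  # style vector, separator, then the melody steps
        sep = np.zeros_like(perf_mean)       # stand-in for the stop token
        return np.vstack([perf_mean, sep, melody_enc])
    if how == "tile":         # append the style vector to each melody step
        tiled = np.tile(perf_mean, (melody_enc.shape[0], 1))
        return np.concatenate([melody_enc, tiled], axis=-1)
    raise ValueError(how)
```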
Interestingly, we find that for the melody & performance model, different methods of combining the embeddings work better on different datasets. For example, concatenate led to the lowest NLL for the internal dataset, while sum outperformed all other variants for MAESTRO. We report NLL values on both MAESTRO and the internal dataset for the perturbed-input model variants in Appendix B." }, { "heading": "5.2 SIMILARITY EVALUATION", "text": "We use the OA metric from Section 4 to evaluate whether using a conditioning signal in both the (a) performance autoencoder and (b) melody & performance autoencoder produces samples that are more similar in style to the conditioning inputs from the evaluation set, relative to other baselines.
First, we sample 500 examples from the evaluation set and use them as conditioning signals to generate one sample for each input. Then, we compare each conditioning signal to: (1) the generated sample and (2) an unconditional sample. We compute the similarity metric defined in Section 4 pairwise and average over the 500 examples. As shown in Table 3, we find that the performance autoencoder generates samples that have 48% higher similarity overall to the conditioning input compared to the unconditional baseline.
For the melody & performance autoencoder, we sample 717 × 2 distinct performances: we reserve one set of 717 for conditioning performance styles, and from the other set of 717 we extract melodies to be synthesized in the style of a different performance. We compare the melody & performance autoencoder to 3 different baselines: (1) a model conditioned only on the melody (Melody-only); (2) a model conditioned only on the performance (Performance-only); and (3) an unconditional language model. Interestingly, we find that the OA metric is more sensitive to performance style than to melodic similarity. Table 4 shows that the Melody-only autoencoder suffers without the performance conditioning, while the Performance-only model performs best. The melody & performance autoencoder performs comparably to the best model." }, { "heading": "5.3 INTERPOLATIONS", "text": "" }, { "heading": "5.3.1 PERFORMANCE AUTOENCODER", "text": "In this experiment, we test whether the performance autoencoder can successfully interpolate between different performances. First, we sample 1000 performances from the internal test set (100 for MAESTRO, due to its smaller size), and split this set in half. The first half we reserve for the original starting performance, which we call “performance A”, and the other half we reserve for the end performance, denoted “performance B.” Then we use the performance encoder to encode performance A into its compressed representation $z_A$, and do the same for performance B to obtain $z_B$. For a range $\alpha \in \{0, 0.125, \ldots, 0.875, 1.0\}$, we sample a new performance $\mathrm{perf}_{\mathrm{new}}$ that results from decoding $\alpha \cdot z_A + (1 - \alpha) \cdot z_B$. We observe how the OA (averaged across all features) defined in Section 4 changes between this newly interpolated performance $\mathrm{perf}_{\mathrm{new}}$ and performances {A, B}.
Specifically, we compute the similarity metric between each input performance A and each interpolated sample $\mathrm{perf}_{\mathrm{new}}$ for all 500 samples, and compute the same pairwise similarity for each performance B. We then compute the normalized distance between each interpolated sample and the corresponding performance A or B, which we denote as $\mathrm{rel\_distance}(\mathrm{perf}\ A) = 1 - \frac{OA_A}{OA_A + OA_B}$, where the OA is averaged across all features.
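For reference, here is a minimal sketch of the per-feature overlapping area and the relative distance just defined, using SciPy; the numerical integration grid is an implementation assumption.

```python
import numpy as np
from scipy.stats import norm

def overlapping_area(mu1, sigma1, mu2, sigma2):
    """Overlapping area between two Gaussian pdfs, by numerical integration."""
    lo = min(mu1 - 4 * sigma1, mu2 - 4 * sigma2)
    hi = max(mu1 + 4 * sigma1, mu2 + 4 * sigma2)
    x = np.linspace(lo, hi, 10000)
    return np.trapz(np.minimum(norm.pdf(x, mu1, sigma1),
                               norm.pdf(x, mu2, sigma2)), x)

def rel_distance(oa_a, oa_b):
    """Normalized distance of an interpolated sample from performance A."""
    return 1.0 - oa_a / (oa_a + oa_b)

# An interpolated sample decoded from alpha * z_A + (1 - alpha) * z_B yields
# per-feature Gaussians that are compared against those of A and of B:
oa_a = overlapping_area(0.0, 1.0, 0.5, 1.2)   # feature overlap vs. performance A
oa_b = overlapping_area(0.0, 1.0, 2.0, 0.8)   # feature overlap vs. performance B
print(rel_distance(oa_a, oa_b))
```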
We average this distance across all elements in the set and find in Figure 2 that the relative distance to performance A slowly increases as we increase α from 0 to 1, as expected. We note that it is not possible to conduct this interpolation study with non-aggregated baselines, as we cannot interpolate across variable-length embeddings. We find that a similar trend holds for MAESTRO, as in Figure 3(a)." }, { "heading": "5.3.2 MELODY & PERFORMANCE AUTOENCODER", "text": "We conduct a similar study with the melody & performance autoencoder. We hold out 716 unique melody-performance pairs (where the melody is not derived from the same performance) from the internal evaluation dataset, and 50 examples from MAESTRO. We then interpolate across the different performances, while keeping the conditioning melody input the same across the interpolations.
As shown in Figure 3(a), a trend similar to the performance autoencoder's holds: the newly interpolated samples show that the relative distance to performance A increases as we increase the corresponding value of α. We note that the interpolation effect is slightly weaker than in the previous section, particularly because the interpolated sample also depends on the melody it is conditioned on. Interestingly, in Figure 3(b), we note that the relative distance to the input performance from which we derived the original melody remains fairly constant across the interpolation procedure. This suggests that we are able to factorize out the two sources of variation, and that varying the input-performance axis keeps the variation in melody constant." }, { "heading": "5.4 HUMAN EVALUATION", "text": "We also conducted listening studies to evaluate the perceived effect of performance and melody conditioning on the generated output. Using models trained on the internal dataset, we conducted two studies: one for performance conditioning, and one for melody and performance conditioning.
[Figure 4: Results of our listening studies, showing the number of times each source won in a pairwise comparison; black error bars indicate the estimated standard deviation of the means. Left panel (Performance Conditioning Study): Ground Truth, Conditioned, Unconditioned. Right panel (Melody & Performance Conditioning Study): Melody & Performance, Melody only, Performance only, Unconditioned. The x-axis shows the number of wins.]
For performance conditioning, we presented participants with a 20s performance clip from the evaluation dataset that we used as a conditioning signal. We then asked them to listen to two additional 20s performance clips and use a Likert scale to rate which one sounded most similar in style to the conditioning signal. The sources the participants rated included “Ground Truth” (a different snippet of the same sample used for the conditioning signal), “Conditioned” (output of the performance autoencoder), and “Unconditioned” (output of the unconditional model). For this study, 492 ratings were collected, with each source involved in 328 pairwise comparisons.
For melody and performance conditioning, we presented participants with a 20s performance clip from the evaluation dataset and a 20s melody from a different piece in the evaluation dataset, which we used as our conditioning signals. We then asked them to listen to two additional 20s performance clips and use a Likert scale to rate which sounded most like the conditioning melody played in the style of the conditioning performance.
The sources the participants rated included “Melody & Performance” (output of the Melody-Performance Autoencoder), “Melody only” (output of a model conditioned only on the melody signal), “Performance only” (output of a model conditioned only on the performance signal), and “Unconditioned” (output of an unconditional model). For this study, 714 ratings were collected, with each source involved in 357 pair-wise comparisons.
Figure 4 shows the number of comparisons in which each source was selected as being most similar in style to the conditioning signal. A Kruskal-Wallis H test of the ratings showed that there is at least one statistically significant difference between the models: χ²(2) = 332.09, p < 0.05 (7.72e−73) for performance conditioning and χ²(2) = 277.74, p < 0.05 (6.53e−60) for melody and performance conditioning. A post-hoc analysis using the Wilcoxon signed-rank test with Bonferroni correction showed that there were statistically significant differences between all pairs of the performance study with p < 0.05/3 and all pairs of the performance and melody study with p < 0.05/6, except between the “Melody only” and “Melody & Performance” models (p = 0.0894).
These results demonstrate that the performance conditioning signal has a clear effect on the generated output. In fact, the effect was sufficiently robust that in the 164 comparisons between “Ground Truth” and “Conditioned”, participants said they had a preference for “Conditioned” 58 times.
Although the results between “Melody only” and “Melody & Performance” are close, this study demonstrates that conditioning with both melody and performance outperforms conditioning on performance alone, and is competitive with melody-only conditioning, despite the model having to deal with the complexity of incorporating both conditioning signals. In fact, we find quantitative evidence that human evaluation is more sensitive to melodic similarity, as the “Performance only” model performs worst – a slight contrast to the results from the OA metric in Section 5.2.
We provide several audio examples demonstrating the effectiveness of these conditioning signals in the online supplement at http://bit.ly/2l14pYg. Our qualitative findings from the audio examples and interpolations, coupled with the quantitative results from the similarity metric and the listening test, which capture different aspects of the synthesized performance, support the finding that the Melody & Performance autoencoder offers significant control over the generated samples." }, { "heading": "6 RELATED WORK", "text": "Sequential autoencoders: Building on the wealth of autoencoding literature (Hinton & Salakhutdinov, 2006; Salakhutdinov & Hinton, 2009; Vincent et al., 2010), our work bridges the gap between the traditional sequence-to-sequence framework (Sutskever et al., 2014), its recent advances with various attention mechanisms (Vaswani et al., 2017; Shaw et al., 2018; Huang et al., 2019b), and sequential autoencoders. Though (Wang & Wan, 2019) propose a Transformer-based conditional VAE for story generation, the self-attention mechanism is shared between the encoder and decoder. Most similar to our work is that of (Kaiser & Bengio, 2018), which uses a Transformer decoder and a discrete autoencoding function to map an input sequence into a discretized, compressed representation. We note that this approach is complementary to ours, where a similar idea of discretization may be applied to the output of our Transformer encoder.
The MusicVAE (Roberts et al., 2018) is a sequential VAE with a hierarchical recurrent decoder, which learns an interpretable latent code for musical sequences that can be used during generation time. This work builds upon (Bowman et al., 2015) that uses recurrence and an autoregressive decoder for text generation. Our Transformer autoencoder can be seen as a deterministic variant of the MusicVAE, with a complex self-attention mechanism based on relative positioning in both the encoder and decoder to capture more expressive features of the data at both the local and global scale.\nControllable generations using representation learning: There is also a wide range of recent work on controllable generations, where we focus on the music domain. (Engel et al., 2017a) proposes to constrain the latent space of unconditional generative models to sample with respect to some predefined attributes, whereas we explicitly define our conditioning signal in the data space and learn a global representation of its style during training. The Universal Music Translation network aims to translate music across various styles, but is not directly comparable to our approach as they work with raw audio waveforms (Mor et al., 2018). Both (Meade et al., 2019) and MuseNet (Payne, 2019) generate music based on user preferences, but adopt a slightly different approach: the models are specifically trained with labeled tokens (e.g., composer and instrumentation) as conditioning input, while our Transformer autoencoder’s global style representation is learned in an unsupervised way." }, { "heading": "7 CONCLUSION", "text": "We proposed our Transformer autoencoder for conditional music generation, a sequential autoencoder model which utilizes an autoregressive Transformer encoder and decoder for improved modeling of musical sequences with long-term structure. We show that this model allows users to easily\nadapt the outputs of their generative model using even a single input performance. Through experiments on the MAESTRO and internal datasets, we demonstrate both quantitatively and qualitatively that our model generates samples that sound similar in style to a variety of conditioning signals relative to baselines. For future work, it would be interesting to explore other training procedures such as variational techniques or few-shot learning approaches to account for situations in which the input signals are from slightly different data distributions than the training set." }, { "heading": "A FURTHER DETAILS ON INPUT PERTURBATION PROCEDURE", "text": "We elaborate upon the perturbation procedure as follows, which only applies to the melody & performance conditioning music generation task. First, we take a target performance that we would like our model to predict and extract the corresponding melody from this performance. We use this “clean” melody as part of our conditioning signal. Then, we modify the conditioning performance by either shifting the pitch up or down 6 semitones and stretching the timing by ± 5%. Then for each new data point during training, a single noise injection procedure is randomly sampled from the cross product of all possible combinations of 12 pitch shift values and 4 time stretch values (evaluated in intervals of 2.5%). At test time, the data points are left unperturbed." 
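As an illustration of the sampling scheme described above, here is a small Python sketch; pitch_shift and time_stretch are hypothetical helpers for transforming a performance, and the concrete value grids are one plausible reading of the 12 × 4 grid described in Appendix A.

    import itertools
    import random

    # 12 pitch-shift values: +/-1 ... +/-6 semitones (zero shift excluded).
    PITCH_SHIFTS = [s for s in range(-6, 7) if s != 0]
    # 4 time-stretch values in intervals of 2.5%, up to +/-5%.
    TIME_STRETCHES = [-0.05, -0.025, 0.025, 0.05]
    # Cross product of all possible combinations: 12 x 4 = 48 perturbations.
    PERTURBATIONS = list(itertools.product(PITCH_SHIFTS, TIME_STRETCHES))

    def perturb(performance, pitch_shift, time_stretch):
        """Apply one randomly sampled perturbation; pitch_shift and
        time_stretch are user-supplied transformation functions."""
        shift, stretch = random.choice(PERTURBATIONS)
        return time_stretch(pitch_shift(performance, shift), 1.0 + stretch)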
}, { "heading": "B NLL EVALUATION FOR ”NOISY” MODEL", "text": "Below, we provide the note-wise test NLL on the MAESTRO and internal datasets with melody conditioning, where the conditioning performance is perturbed by the procedure outlined in Section 3." }, { "heading": "C ADDITIONAL DETAILS ON MODEL TRAINING PROCEDURE", "text": "We emphasize that the Transformer is trained in an autoencoder-like fashion. Specifically, for performance-only conditioning, the Transforemr decoder is tasked with predicting the same performance that was fed as input to the encoder. In this way, we encourage the model to learn global representations (the mean-aggregated performance embedding from the encoder) that will faithfully be able to reconstruct the input performance. For melody performance conditioning, the Transformer autoencoder is trained to predict a new performance using the combined melody+performance embedding, where the loss is computed with respect to the conditioned input performance that is provided to the encoder." }, { "heading": "D MODEL ARCHITECTURE AND HYPERPARAMETER CONFIGURATIONS", "text": "We mostly use the default Transformer architecture as provided in the Tensor2Tensor framework, such as 8 self-attention heads as listed in the main text, and list the slight adjustments we made for each dataset below:\nD.1 MAESTRO\nFor the MAESTRO dataset, we follow the hyperparameter setup of (Huang et al., 2019b):\n1. num hidden layers = 6 2. hidden units = 384 3. filter size = 1024 4. maximum sequence length = 2048 5. maximum relative distance = half the hidden size 6. dropout = 0.1\nD.2 INTERNAL DATASET\nFor the internal dataset, we modify the number of hidden layers to 8 and slightly increase the level of dropout.\n1. num hidden layers = 8 2. hidden units = 384 3. filter size = 1024 4. maximum sequence length = 2048 5. maximum relative distance = half the hidden size 6. dropout = 0.15\nE INTERPOLATIONS DETAILS\nInterpolation relative distance results for the (a) performance and (b) melody & performance Transformer autoencoders for the MAESTRO dataset.\n(a) Relative distance from interpolated sample to the original starting performance. (b) Relative distance from the interpolated sample to the original melody, which is kept fixed.\nFigure 5: The distance to the original performance increases as the value of α increases in (a), as expected. In (b), we see that there is a very slight increase in the relative distance to the original melody during the interpolation procedure.\nF INTERNAL DATASET PERFORMANCE INTERPOLATIONS\nHere, we provide piano rolls demonstrating the effects of latent-space interpolation for the internal dataset, for both the (a) performance and (b) melody & performance Transformer autoencoder. For similar results in MAESTRO as well as additional listening samples, we refer the reader to the online supplement: http://bit.ly/2l14pYg.\nG INTERNAL DATASET MELODY INTERPOLATION" } ]
2019
null
SP:e472738b53eec7967504021365ac5b4808028ec1
[ "This paper introduces a corpus-based approach to build sentiment lexicon for Amharic. In order to save time and costs for the resource-limited language, the lexicon is generated from an Amharic news corpus by the following steps: manually preparing polarized seed words lists (strongly positive and strongly negative), calculating the co-occurrence of target word in its context via Positive Point-wise Mutual Information (PPMI) method, measuring the similarity between target words and seed words by cosine distance, iterating with the threshold 100 and 200. The PPMI lexicon is stemmed and evaluated from aspects of subjectivity detection, coverage, agreement and sentiment classification. Three other lexicons: Manual developed by manual, SOCAL and SWN developed by bilingual dictionary, are used as benchmark to compare with the PPMI lexicon. In sentiment classification experiment the PPMI lexicon did not show a superior performance. All the four lexicons have similar accuracy, between 42.16% ~ 48.87%. Only when the four are combined together the result is improved to 83.51%. ", "This paper proposes a domain-specific corpus-based approach for generating semantic lexicons for the low-resource Amharic language. Manual construction of lexicons is especially hard and expensive for low-resource languages. More importantly, the paper points out that existing dictionaries and lexicons do not capture cultural connotations and language specific features, which is rather important for tasks like sentiment classification. Instead, this work proposes to automatically generate a semantic lexicon using distributional semantics from a corpus." ]
Sentiment classification is an active research area with several applications, including analysis of political opinions and classification of comments, movie reviews, news reviews and product reviews. To employ rule-based sentiment classification, we require sentiment lexicons. However, manual construction of a sentiment lexicon is time-consuming and costly for resource-limited languages. To bypass manual development time and costs, we build Amharic sentiment lexicons relying on a corpus-based approach. The intention of this approach is to handle sentiment terms specific to the Amharic language from an Amharic corpus. A small set of seed terms is manually prepared from three parts of speech: noun, adjective and verb. We developed algorithms for constructing Amharic sentiment lexicons automatically from an Amharic news corpus. The corpus-based approach relies on word co-occurrence distributional embeddings, specifically a frequency-based embedding (Positive Point-wise Mutual Information, PPMI). Using PPMI with threshold values of 100 and 200, we obtained corpus-based Amharic sentiment lexicons of size 1811 and 3794, respectively, by expanding 519 seeds. Finally, the lexicon generated by the corpus-based approach is evaluated. keywords: Amharic Sentiment Lexicon, Amharic Sentiment Classification, Seed Words
[]
[ { "authors": [ "D Alessia", "Fernando Ferri", "Patrizia Grifoni", "Tiziana Guzzo" ], "title": "Approaches, tools and applications for sentiment analysis implementation", "venue": "International Journal of Computer Applications,", "year": 2015 }, { "authors": [ "S. Gebremeskel" ], "title": "Sentiment mining model for opinionated amharic texts", "venue": "Unpublished Masters Thesis and Department of Computer Science and Addis Ababa University and Addis Ababa,", "year": 2010 }, { "authors": [ "William L Hamilton", "Kevin Clark", "Jure Leskovec", "Dan Jurafsky" ], "title": "Inducing domainspecific sentiment lexicons from unlabeled corpora", "venue": "In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Girma Neshir Alemneh", "Andreas Rauber", "Solomon Atnafu" ], "title": "Dictionary Based Amharic", "venue": "Sentiment Lexicon Generation,", "year": 2019 }, { "authors": [ "Lucia Passaro", "Laura Pollacci", "Alessandro Lenci" ], "title": "Item: A vector space model to bootstrap an italian emotive lexicon", "venue": "In Second Italian Conference on Computational Linguistics CLiC-it", "year": 2015 }, { "authors": [ "Chris Potts" ], "title": "Distributional approaches to word meanings", "venue": "Ling 236/Psych 236c: Representations of meaning,", "year": 2013 }, { "authors": [ "Peter D Turney", "Michael L Littman" ], "title": "Measuring praise and criticism: Inference of semantic orientation from association", "venue": "ACM Transactions on Information Systems (TOIS),", "year": 2003 }, { "authors": [ "Peter D Turney", "Patrick Pantel" ], "title": "From frequency to meaning: Vector space models of semantics", "venue": "Journal of artificial intelligence research,", "year": 2010 }, { "authors": [ "Leonid Velikovich", "Sasha Blair-Goldensohn", "Kerry Hannan", "Ryan McDonald" ], "title": "The viability of web-derived polarity lexicons", "venue": "In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics,", "year": 2010 }, { "authors": [ "Daniel Waegel" ], "title": "A survey of bootstrapping techniques in natural language processing", "venue": null, "year": 2003 }, { "authors": [ "Baye Yimam" ], "title": "የአማርኛ-ሰዋሰዉ)yäamarIña säwasäw. Educational Materials Production and Distribution Enterprise(EMPDE)", "venue": null, "year": 2000 } ]
[ { "heading": null, "text": "keywords: Amharic Sentiment lexicon , Amharic Sentiment Classification , Seed words" }, { "heading": "1 INTRODUCTION", "text": "Most of sentiment mining research papers are associated to English languages. Linguistic computational resources in languages other than English are limited. Amharic is one of resource limited languages. Due to the advancement of World Wide Web, Amharic opinionated texts is increasing in size. To manage prediction of sentiment orientation towards a particular object or service is crucial for business intelligence, government intelligence, market intelligence, or support decision making. For carrying out Amharic sentiment classification, the availability of sentiment lexicons is crucial. To-date, there are two generated Amharic sentiment lexicons. These are manually generated lexicon(1000) (Gebremeskel, 2010) and dictionary based Amharic SWN and SOCAL lexicons (Neshir Alemneh et al., 2019). However, dictionary based generated lexicons has short-comings in that it has difficulty in capturing cultural connotation and language specific features of the language. For example, Amharic words which are spoken culturally and used to express opinions will not be obtained from dictionary based sentiment lexicons. The word ጉርሻ/\"feed in other people with hands which expresses love and live in harmony with others\"/ in the Amharic text: \"እንደ ጉርሻ ግን የሚያግባባን የለም. . . ጉርሻ እኮ አንዱ ለሌላው የማጉረስ ተግባር ብቻ አይደለም፤ በተጠቀለለው እንጀራ ውስጥ ፍቅር አለ፣ መተሳሰብ አለ፣ አክብሮት አለ።\" has positive connotation or positive sentiment. But the dictionary meaning of the word ጉርሻ is \"bonus\". This is far away from the cultural connotation that it is intended to represent and express. We assumed that such kind of culture (or language specific) words are found in a collection of Amharic texts. However, dictionary based lexicons has short comings to capture sentiment terms which has strong ties to language and culture specific connotations of Amharic. Thus, this work builds corpus based algorithm to handle language and culture specific words in the lexicons. However, it could probably be impossible to handle all the words in the language as the corpus is a limited resource in almost all less resourced languages like Amharic. But still it is possible to build sentiment lexicons in particular domain where large amount of Amharic corpus is available. Due to this reason, the lexicon built using this approach is usually used for lexicon based\nsentiment analysis in the same domain from which it is built. The research questions to be addressed utilizing this approach are: (1) How can we build an approach to generate Amharic Sentiment Lexicon from corpus? (2)How do we evaluate the validity and quality of the generated lexicon? In this work, we set this approach to build Amharic polarity lexicons in automatic way relying on Amharic corpora which is mentioned shortly. The corpora are collected from different local news media organizations and also from facebook news' comments and you tube video comments to extend and enhance corpus size to capture sentiment terms into the generated PPMI based lexicon." }, { "heading": "2 RELATED WORKS", "text": "In this part, we will present the key papers addressing corpus- based Sentiment Lexicon generation. In (Velikovich et al., 2010), large polarity lexicon is developed semiautomatically from the web by applying graph propagation method. A set of positive and negative sentences are prepared from the web for providing clue to expansion of lexicon. 
The method assigns a higher positive value if a given seed phrase contains multiple positive seed words; otherwise it is assigned a negative value. The polarity p of seed phrase i is given by: p_i = p_i^+ − β p_i^−, where β is the factor responsible for preserving the overall semantic orientation between positive and negative flows over the graph. Both quantitatively and qualitatively, the web-generated lexicon outperforms lexicons generated from manually annotated lexical resources like WordNet. The authors in (Hamilton et al., 2016) developed two domain-specific sentiment lexicons (historical and online-community specific) from a 150-year historical corpus and online community data, using word embeddings with a label propagation algorithm to expand a small list of seed terms. It achieves competitive performance with approaches relying on hand-curated lexicons. This revealed that the sentiment of words changes over time, either from positive to negative or vice versa. A lexical graph is constructed using a PPMI matrix computed from the word embeddings. To fill the edge between two nodes (w_i, w_j), cosine similarity is computed. To propagate sentiment from the seeds in the lexical graph, a random walk algorithm is adapted; that is, the polarity score of a word with respect to a seed set is proportional to the probability of a random walk from the seed set hitting that word. The lexicon generated from domain-specific embeddings performs very well when compared with the baseline and other variants. Our work is closely related to the work of Passaro et al. (2015), who generated an emotion lexicon by bootstrapping a corpus using word distributional semantics (i.e., using PPMI). Our approach is different from their work in that we generate a sentiment lexicon rather than an emotion lexicon. Moreover, the approach of propagating sentiment to expand the seeds is also different: we use the cosine similarity of the mean vector of the seed words to the corresponding word vectors in the vocabulary of the PPMI matrix. Besides, the threshold selection and the parts of speech of the seed words differ from language to language. For example, Amharic has few adverb classes, unlike Italian; thus, our seed words do not contain adverbs." }, { "heading": "3 PROPOSED CORPUS BASED APPROACHES", "text": "There are a variety of corpus-based strategies, including count-based (e.g., PPMI) and prediction-based (e.g., word embedding) approaches. In this part, we present the proposed count-based approach to generate an Amharic sentiment lexicon from a corpus. In Figure 1, we present the proposed framework of the corpus-based approach to generate an Amharic sentiment lexicon. The framework has four components: the (Amharic news) corpus collection, a preprocessing module, the PPMI word-context matrix, and the algorithm to generate the (Amharic) sentiment lexicon, resulting in the generated (Amharic) sentiment lexicon.
The algorithm and the seeds in Figure 1 are briefly described as follows. To generate the Amharic sentiment lexicon, we follow four major steps:
1. Prepare a set of seed lists which are strongly negatively and positively polarized adjectives, nouns and verbs (note: since the Amharic language contains few adverbs (Passaro et al., 2015), adverbs are not taken as seed words). We select at least the seven most polarized seed words for each of the aforementioned part-of-speech classes (Yimam, 2000). Selection of seed words is the most critical factor affecting the performance of the bootstrapping algorithm (Waegel, 2003).
Most authors choose the most frequently occurring words in the corpus as the seed list. This is assumed to ensure the greatest amount of contextual information to learn from; however, we are not sure about the quality of the contexts. We adapt and follow the seed selection guidelines of Turney & Littman (2003); after trying seed selection based on these guidelines, we update the original seed words. A sample summary of seeds is presented in Table 1.
2. Build the semantic-space word-context matrix (Potts, 2013; Turney & Pantel, 2010) using the number of occurrences (frequency) of each target word with its context words within a window of size ±2. The word-context design is selected as it is dense and good for building rich word representations (i.e., word similarity), unlike the word-document design, which is sparse and computationally expensive (Potts, 2013; Turney & Pantel, 2010). Initially, let F be the word-context raw frequency matrix with n_r rows and n_c columns formed from the Amharic text corpora. Next, we apply weighting functions to select features that discriminate word semantic similarity. There are a variety of weighting functions for obtaining meaningful semantic similarity between a word and its context; the most popular one is Point-wise Mutual Information (PMI) (Turney & Littman, 2003). In our case we use positive PMI, assigning 0 if the PMI value is less than 0 (Bullinaria & Levy, 2007). Then, let X be the new PMI-based matrix obtained by applying positive PMI (PPMI) to matrix F. Matrix X has the same number of rows and columns as matrix F. The value of an element f_{ij} is the number of times that word w_i occurs in the context c_j in matrix F. Then, the corresponding element x_{ij} in the new matrix X is defined as follows:

x_{ij} = \begin{cases} PMI(w_i, c_j) & \text{if } PMI(w_i, c_j) > 0 \\ 0 & \text{if } PMI(w_i, c_j) \leq 0 \end{cases} \quad (1)

where PMI(w_i, c_j) is the Point-wise Mutual Information that measures the estimated co-occurrence of word w_i and its context c_j, and is given as:

PMI(w_i, c_j) = \log \frac{P(w_i, c_j)}{P(w_i) P(c_j)} \quad (2)

where P(w_i, c_j) is the estimated probability that the word w_i occurs in the context c_j, and P(w_i) and P(c_j) are the estimated probabilities of w_i and c_j, respectively; all are defined in terms of the frequencies f_{ij}.
3. Compute the cosine distance between each target term and the centroid of the seed lists (e.g., the centroid for positive adjective seeds, \vec{\mu}^{+}_{adj}). To find the cosine distance of a new word from a seed list, we first compute the centroids of the seed lists of the respective POS classes; for example, the centroids for positive seeds S^+ and negative seeds S^- of the adjective class are given by:

\vec{\mu}^{+}_{adj}(S^+) = \frac{\sum_{w \in S^+} \vec{w}}{|S^+|} \quad \text{and} \quad \vec{\mu}^{-}_{adj}(S^-) = \frac{\sum_{w \in S^-} \vec{w}}{|S^-|} \quad (3)

Similarly, the centroids of the other seed classes are found. Then, the cosine distances of a target word from the centroids of the positive and negative adjective seeds, \vec{\mu}^{+}_{adj} and \vec{\mu}^{-}_{adj}, are given by:

\cos(\vec{w}_i, \vec{\mu}^{+}_{adj}) = \frac{\vec{w}_i \cdot \vec{\mu}^{+}_{adj}}{\|\vec{w}_i\| \, \|\vec{\mu}^{+}_{adj}\|} \quad \text{and} \quad \cos(\vec{w}_i, \vec{\mu}^{-}_{adj}) = \frac{\vec{w}_i \cdot \vec{\mu}^{-}_{adj}}{\|\vec{w}_i\| \, \|\vec{\mu}^{-}_{adj}\|} \quad (4)

As the word-context matrix X is a vector space model, the cosine of the angle between two word vectors is the same as the inner product of the normalized unit word vectors. Once we have the cosine distance between word \vec{w}_i and a seed centroid \vec{\mu}^{+}_{adj}, the similarity measure can be found using either:

Sim(\vec{w}_i, \vec{\mu}^{+}_{adj}) = \frac{1}{\cos(\vec{w}_i, \vec{\mu}^{+}_{adj})} \quad \text{or} \quad Sim(\vec{w}_i, \vec{\mu}^{+}_{adj}) = 1 - \cos(\vec{w}_i, \vec{\mu}^{+}_{adj}) \quad (5)

Similarly, the similarity score Sim(\vec{w}_i, \vec{\mu}^{-}_{adj}) can also be computed.
This similarity score for each target word is mapped and scaled to an appropriate real number. A target word whose sentiment score is below or above a particular threshold can be added to the corresponding sentiment dictionary, in ranked order based on the PMI-based cosine distances. We choose positive PMI with the cosine measure as it performs consistently better than the other combinations of features with similarity metrics: Hellinger, Kullback-Leibler, City Block, Bhattacharyya and Euclidean (Bullinaria & Levy, 2007).
4. Repeat from step 3 for the next target term in the matrix to expand the lexicon. Stop after a number of iterations defined by a threshold acquired through experimental testing. The detailed algorithm for generating the Amharic sentiment lexicon from PPMI is presented in Algorithm 1.
Algorithm description: Algorithm 1 reads the seed words and generates the merged set of expanded seed words using PPMI. Line 1 loads the seed words and assigns them to their corresponding seed word categories. Similarly, lines 2 to 6 load the necessary lexical resources (the PPMI matrix, vocabulary list, Amharic-English dictionary, Amharic-Amharic dictionary, and Amharic Sentiment SWN), and in line 7 the output Amharic sentiment lexicon by PPMI is initialized to null. Lines 8 to 22 iterate over each seed word polarity and category. Lines 9 to 11 check that each seed term is found in the corpus vocabulary. Line 12 initializes the threshold to a selected trial number (in our case 100, 200, 1000, etc.). Lines 13 to 22 iterate from i = 0 to the threshold in order to perform a set of operations. That is, line 16 computes the mean of the seed lexicon based on equation 3, as specified in the previous section. Line 17 computes the similarity between the mean vector and the PPMI word-word co-occurrence matrix and returns the top i closest terms to the mean vector based on equation 5. Lines 18-19 remove top closest items whose part of speech differs from that of the seed words. Lines 20-21 check whether the top i closest terms have a different polarity from the seed lexicon. Line 22 updates the PPMI lexicon by inserting the newly obtained sentiment terms. Line 23 returns the generated Amharic sentiment lexicon by PPMI. 
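As a complement to the line-by-line description above and the pseudocode listing below, the following is a minimal Python sketch of the expansion loop, assuming a precomputed dense PPMI matrix whose rows are indexed by the vocabulary; names such as ppmi and expand_seeds are illustrative, not from our implementation, and the POS and polarity filters are omitted for brevity.

    import numpy as np

    def expand_seeds(seed_indices, ppmi, n_iterations=200, top_k=10):
        """Grow a seed lexicon by repeatedly adding the terms whose PPMI
        vectors are closest (by cosine) to the mean vector of the lexicon."""
        lexicon = set(seed_indices)
        # Normalize rows once so that dot products equal cosine similarities.
        norms = np.linalg.norm(ppmi, axis=1, keepdims=True)
        unit = ppmi / np.maximum(norms, 1e-12)
        for _ in range(n_iterations):
            mean_vec = unit[list(lexicon)].mean(axis=0)          # equation (3)
            mean_vec /= max(np.linalg.norm(mean_vec), 1e-12)
            sims = unit @ mean_vec                               # equation (4)
            sims[list(lexicon)] = -np.inf    # skip terms already in the lexicon
            closest = np.argpartition(-sims, top_k)[:top_k]
            # POS filtering and polarity check against Amharic SWN
            # (lines 18-21 of Algorithm 1) would be applied here.
            lexicon.update(int(i) for i in closest)
        return lexicon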
Using the corpus-based approach, Amharic sentiment lexicons are built in a way that allows finding domain-dependent opinions, which might not be possible with a sentiment lexicon generated using the dictionary-based approach.
Input: PPMI: word-context PPMI matrix
Output: AM_Lexicon_by_PPMI: generated Amharic sentiment lexicon
1   seed_noun+, seed_noun−, seed_adj+, seed_adj−, seed_verb+, seed_verb− ← LoadCorrespondingSeedCategoryFile
    PPMI ← LoadPPMIMatrixFile
    Vocab ← LoadVocabularyFile
    AmharicEnglishDic ← LoadAmharicEnglishDictionaryFile
    AmharicAmharicDic ← LoadAmharicAmharicDictionaryFile
    AmharicSWN ← LoadAmharicSentimentSWNFile
    AM_Lexicon_by_PPMI ← Null
    foreach seed_lexicon ∈ {seed_noun+, seed_noun−, seed_adj+, seed_adj−, seed_verb+, seed_verb−} do
2       foreach seed ∈ seed_lexicon do
3           if seed ∉ Vocab then
4               remove seed from seed_lexicon
5       Threshold ← number of iterations
        foreach i ← 0 to Threshold do
6           mean_vector ← compute_mean(seed_lexicon) by equation 3
            top_ten_closest_terms ← compute_similarity(mean_vector, PPMI) by equation 4
            if a term ∈ top_ten_closest_terms is also ∈ seed_lexicon then
7               remove the term from the top_ten_closest_terms list, as it is a duplicate
8           if any term in top_ten_closest_terms has a different part of speech from seed_lexicon then
9               remove the term from the top_ten_closest_terms list
10          if any term in top_ten_closest_terms has a different polarity from the Amharic SWN lexicon then
11              remove the term from the top_ten_closest_terms list
12          update seed_lexicon by inserting the top_ten_closest_terms list
13      AM_Lexicon_by_PPMI ← AM_Lexicon_by_PPMI + seed_lexicon
    return AM_Lexicon_by_PPMI
Algorithm 1: Amharic Sentiment Lexicon Generation Algorithm Using PPMI
The quality of this lexicon is evaluated using techniques similar to those used for the dictionary-based approaches (Neshir Alemneh et al., 2019). However, this approach will probably not produce a sentiment lexicon with large coverage, as the corpus size may be insufficient to include all polarity words in the Amharic language. To reduce the impact of this issue, we combine the lexicons generated by both the dictionary-based and corpus-based approaches for Amharic sentiment classification." }, { "heading": "4 RESULTS AND DISCUSSIONS", "text": "Using the corpus-based approach, an Amharic sentiment lexicon is built in a way that allows finding domain-dependent opinions, which might not be possible with a sentiment lexicon generated using the dictionary-based approach. In this section, we have attempted to develop new approaches to bootstrapping, relying on a word-context semantic-space representation of large Amharic corpora." }, { "heading": "4.1 SEED WORDS", "text": "We have manually prepared Amharic opinion words with the highest sentiment strength, either positive or negative, from three part-of-speech categories: adjectives, nouns and verbs. We expanded each seed category; a snapshot of the seed words is presented in Table 1." }, { "heading": "4.2 STEMMING", "text": "Amharic is a highly morphological language; both its inflectional and derivational morphology are complex. Thus, without applying stemming, it is not easy to do computational
}, { "heading": "5 EVALUATION", "text": "" }, { "heading": "5.1 CORPUS AND LEXICAL RESOURCES", "text": "The corpus and data sets used in this research are presented as follows: i. Amharic Corpus: The size of this corpus is 20 milion tokens (teams from Addis Ababa University et al.). This corpus is used to build PPMI matrix and also to evaluate the coverage of PPMI based lexicon. ii. Facebook Users' Comment This data set is used to build PPMI matrix and also to evaluate subjectivity detection, lexicon coverage and lexicon based sentiment classification of the generated Amharic Sentiment lexicon. The data set is manually annotated by Government Office Affairs Communication(GOAC) professional and it is labeled either positive and negative. iii. Amharic Sentiment Lexicons: The Amharic sentiment lexicons includes manual (1000) (Gebremeskel, 2010), Amharic SentiWordNet(SWN)(13679) (Neshir Alemneh et al., 2019) and Amharic Semantic Orientation Calculator(SOCAL) (5683) (Neshir Alemneh et al., 2019). These lexicons are used as benchmark to compare the performance of PPMI based lexicons." }, { "heading": "5.2 EXPERIMENTAL SETTINGS", "text": "Using the aforementioned corpus, Amharic News Corpus with 11433 documents and 2800 Facebook News post and Users Comments are used to build word-context PPMI. First, we tried to clean the data set. After cleansing, we tokenized and word-Context count dictionary is built. Relying on this dictionary and in turn, it is transformed into PPMI Sparse matrices. This matrix is saved in file for further tasks of Amharic Sentiment lexicon generation. The total vocabulary size is 953,406. After stop words are removed and stemming is applied, the vocabulary size is reduced to 231,270. Thus, the size of this matrix is (231270, 231270). Based on this word-word information in this matrix, the point-wise mutual information (PMI) of word-word is computed as in equation 2. The PMI is input to further computation of our corpus based lexicon expansion algorithm 1. Finally, we generated Amharic sentiment lexicons by expanding the seeds relying on the PPMI matrix of the corpus by implementing this algorithm 1 at two threshold iteration values of 100 and 200. With these iterations, we got corpus based Amharic Sentiment lexicons of size 1811 and 3794 respectively. We think, iterations >200 is better. Thus, our further discussion on evaluation of the approach is based on the lexicon generated at 200 iteration(i.e.Lexicon size of 3794). This lexicon saved with entries containing stemmed word, part of speech, polarity strength and polarity sign. Sample of this lexicon is presented in Table 2.\nWe will evaluate in three ways: external to lexicon and internal to lexicon. External to lexicon is to test the usefulness and the correctness of each of the lexicon to find\nsentiment score of sentiment labeled Amharic comments corpus. Internal evaluation is compute the degree to which each of the generated lexicons are overlapped( or agreed ) with manual, SOCAL and SWN(Amharic) Sentiment lexicons." }, { "heading": "5.3 SUBJECTIVITY DETECTION", "text": "In this part , we evaluate the accuracy of the subjectivity detection rate of generated PPMI based Amharic lexicon on 2800 facebook annotated comments. The procedures for aggregating sentiment score is done by summing up the positive and negative sentiment values of opinion words found in the comment if those opinion words are also found in the sentiment lexicon. 
" }, { "heading": "5.3 SUBJECTIVITY DETECTION", "text": "In this part, we evaluate the subjectivity detection rate of the generated PPMI-based Amharic lexicon on 2800 annotated Facebook comments. The sentiment score of a comment is aggregated by summing up the positive and negative sentiment values of the opinion words found in the comment, provided those opinion words are also found in the sentiment lexicon. If the sum of the positive scores is greater than or less than the sum of the negative scores (i.e., the two sums are not equal), then the comment is said to be subjective; otherwise, the comment is either neutral or mixed. Based on this technique, the subjectivity detection rate is presented in Table 3.
Discussion: As the subjectivity detection rates of the PPMI lexicon and the others depicted in Table 3 show, the PPMI lexicon performs better than the baseline (manual lexicon), whereas the lexicon from SWN outperforms the PPMI-based lexicon by 2% accuracy." }, { "heading": "5.4 GENERATED LEXICON’S COVERAGE", "text": "This evaluates the coverage of the generated lexicon externally, using the aforementioned Amharic corpora (both the Facebook comments and the general corpus). That is, the coverage of the PPMI-based Amharic sentiment lexicon on the Facebook comments and a general Amharic corpus is computed by counting the corpus tokens that occur in the generated sentiment lexicons; both positive and negative counts are computed in percent and presented in Table 4.
Discussion: Table 4 shows that the coverage of the PPMI-based Amharic sentiment lexicon is better than that of the manual lexicon and SOCAL. However, it has less coverage than SWN. Unlike SWN, the PPMI-based lexicon is generated from a corpus; for this reason its coverage on a general domain is limited. The table also demonstrates that the positive and negative counts in almost all lexicons reflect a balanced and uniform distribution of sentiment polarity terms in the corpus." }, { "heading": "5.5 LEXICONS’ AGREEMENT", "text": "In this part, we evaluate to what extent the generated PPMI-based lexicon agrees or overlaps with the other lexicons. This (internal) type of evaluation validates the lexicon by stating the percentage of entries that the lexicons have in common; a higher percentage means better agreement or overlap between the lexicons. The agreement of the PPMI-based lexicon, in percent, is presented in Table 5.
Discussion: Table 5 presents the extent to which the PPMI-based lexicon agrees with the other lexicons. The PPMI-based lexicon has the highest agreement rate (overlap) with the SWN lexicon." }, { "heading": "5.6 LEXICON BASED AMHARIC SENTIMENT CLASSIFICATION", "text": "In this part, Table 6 presents the lexicon-based sentiment classification performance of the generated PPMI-based Amharic lexicon on 2821 annotated Amharic Facebook comments. The classification accuracies of the generated PPMI-based lexicon and the other lexicons are compared.
Discussion: Besides the other evaluations of the generated PPMI-based lexicon, its usefulness is tested on actual lexicon-based Amharic sentiment classification. As depicted in Table 6, the accuracy of the PPMI-based lexicon for lexicon-based sentiment classification is better than that of the manual benchmark lexicon. As discussed for the dictionary-based lexicons (Neshir Alemneh et al., 2019) in an earlier section, stemming and negation handling greatly improve the performance of lexicon-based classification. Moreover, the combination of lexicons outperforms any individual lexicon.
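The scoring rule used in Sections 5.3 and 5.6 can be sketched in a few lines of Python; here lexicon is assumed to map stemmed tokens to signed polarity strengths, and stem is a hypothetical stand-in for the light stemmer of Section 4.2 (negation handling is omitted).

    def score_comment(tokens, lexicon, stem):
        """Sum positive and negative polarity strengths of lexicon hits."""
        pos = neg = 0.0
        for token in tokens:
            strength = lexicon.get(stem(token), 0.0)
            if strength > 0:
                pos += strength
            elif strength < 0:
                neg += -strength
        return pos, neg

    def classify(tokens, lexicon, stem):
        pos, neg = score_comment(tokens, lexicon, stem)
        if pos == neg:
            return "neutral/mixed"   # not subjective under the rule above
        return "positive" if pos > neg else "negative"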
" }, { "heading": "6 CONCLUSIONS", "text": "Creating a sentiment lexicon is not an objective process. The generated lexicon depends on the task to which it is applied. In this work we have shown that it is possible to create a sentiment lexicon for low-resourced languages from a corpus. This captures the language-specific features and connotations related to the culture in which the language is spoken, which cannot be handled using a dictionary-based approach that propagates labels from resource-rich languages. To the best of our knowledge, the PPMI-based approach to generate an Amharic sentiment lexicon from a corpus is performed here for the first time for the Amharic language, with minimal cost and time. The generated lexicons can thus be used in combination with other sentiment lexicons to enhance the performance of sentiment classification in the Amharic language. The approach is generic and can be adapted to other resource-limited languages to reduce the cost of human annotation and the time it takes to annotate sentiment lexicons. Though the PPMI-based Amharic sentiment lexicon outperforms the manual lexicon, a prediction-based (word embedding) approach is recommended for generating a sentiment lexicon for the Amharic language in order to handle context-sensitive terms." } ]
2019
CORPUS BASED AMHARIC SENTIMENT LEXICON GENERATION
SP:77d59e1e726172184249bdfdd81011617dc9c208
[ "The paper proposes a quantum computer-based algorithm for semi-supervised least squared kernel SVM. This work builds upon LS-SVM of Rebentrost et al (2014b) which developed a quantum algorithm for the supervised version of the problem. While the main selling point of quantum LS-SVM is that it scales logarithmically with data size, supervised algorithms shall not fully enjoy logarithmic scaling unless the cost for collecting labeled data is also logarithmic, which is unlikely. Therefore, semi-supervised setting is certainly appealing. Technically, there are two main contributions. The first is the method of providing Laplacian as an input to the quantum computer. The second contribution, which is about the computation of matrix inverse (K + KLK)^{-1}, is a bit more technical, and could be considered as the main contribution of the paper.", "This paper developes a quantum algorithm for kernel-based support vector machine working in a semi-supervised learning setting. The motivation is to utilise the significant advantage of quantum computation to train machine learning models on large-scale datasets efficiently. This paper reviews the existing work on using quantum computing for least-squares svm (via solving quantum linear systems of equations) and then extends it to deal with kernel svm in a semi-supervised setting. " ]
Quantum machine learning methods have the potential to facilitate learning using extremely large datasets. While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors than to obtain the corresponding labels. One of the approaches for addressing this issue is to use semi-supervised learning, which leverages not only the labeled samples, but also unlabeled feature vectors. Here, we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines. The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss, while maintaining the same quantum speedup as the Quantum LS-SVM.
[]
[ { "authors": [ "Scott Aaronson" ], "title": "Read the fine print", "venue": "Nature Physics,", "year": 2015 }, { "authors": [ "2019. Srinivasan Arunachalam", "Ronald de Wolf" ], "title": "A survey of quantum learning theory", "venue": null, "year": 2019 }, { "authors": [ "Srinivasan Arunachalam", "Vlad Gheorghiu", "Tomas Jochym-O’Connor", "Michele Mosca", "Priyaa Varshinee Srinivasan" ], "title": "On the robustness of bucket brigade quantum RAM", "venue": "New Journal of Physics,", "year": 2015 }, { "authors": [ "Dominic W Berry", "Graeme Ahokas", "Richard Cleve", "Barry C Sanders" ], "title": "Efficient quantum algorithms for simulating sparse Hamiltonians", "venue": "Communications in Mathematical Physics,", "year": 2007 }, { "authors": [ "Vedran Dunjko", "Hans J Briegel" ], "title": "Machine learning & artificial intelligence in the quantum domain: a review of recent progress", "venue": "Reports on Progress in Physics,", "year": 2018 }, { "authors": [ "Glenn M Fung", "Olvi L Mangasarian" ], "title": "Multicategory proximal support vector machine classifiers", "venue": "Machine Learning,", "year": 2005 }, { "authors": [ "Vittorio Giovannetti", "Seth Lloyd", "Lorenzo Maccone" ], "title": "Quantum random access memory", "venue": "Physical Review Letters,", "year": 2008 }, { "authors": [ "Aram W Harrow", "Avinatan Hassidim", "Seth Lloyd" ], "title": "Quantum algorithm for linear systems of equations", "venue": "Physical Review Letters,", "year": 2009 }, { "authors": [ "S Sathiya Keerthi", "Dennis DeCoste" ], "title": "A modified finite newton method for fast solution of large scale linear SVMs", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Shelby Kimmel", "Cedric Yen-Yu Lin", "Guang Hao Low", "Maris Ozols", "Theodore J Yoder" ], "title": "Hamiltonian simulation with optimal sample complexity", "venue": "npj Quantum Information,", "year": 2017 }, { "authors": [ "Tongyang Li", "Shouvanik Chakrabarti", "Xiaodi Wu" ], "title": "Sublinear quantum algorithms for training linear and kernel-based classifiers", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Seth Lloyd", "Masoud Mohseni", "Patrick Rebentrost" ], "title": "Quantum principal component analysis", "venue": "Nature Physics,", "year": 2014 }, { "authors": [ "Stefano Melacci", "Mikhail Belkin" ], "title": "Laplacian support vector machines trained in the primal", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Alejandro Perdomo-Ortiz", "Marcello Benedetti", "John Realpe-Gómez", "Rupak Biswas" ], "title": "Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers", "venue": "Quantum Science and Technology,", "year": 2018 }, { "authors": [ "John Platt" ], "title": "Sequential minimal optimization: A fast algorithm for training support vector machines", "venue": "Technical Report MSR-TR-98-14,", "year": 1998 }, { "authors": [ "Patrick Rebentrost", "Masoud Mohseni", "Seth Lloyd" ], "title": "Quantum support vector machine for big data classification", "venue": "Physical Review Letters,", "year": 2014 }, { "authors": [ "Patrick Rebentrost", "Masoud Mohseni", "Seth Lloyd" ], "title": "Quantum support vector machine for big data classification", "venue": "Physical Review Letters,", "year": 2014 }, { "authors": [ "M. Schuld", "F. 
Petruccione" ], "title": "Supervised Learning with Quantum Computers", "venue": null, "year": 2018 }, { "authors": [ "Shai Shalev-Shwartz", "Yoram Singer", "Nathan Srebro", "Andrew Cotter" ], "title": "Pegasos: Primal estimated sub-gradient solver for svm", "venue": "Mathematical Programming,", "year": 2011 }, { "authors": [ "Johan AK Suykens", "Joos Vandewalle" ], "title": "Least squares support vector machine classifiers", "venue": "Neural Processing Letters,", "year": 1999 } ]
[ { "heading": null, "text": "Quantum machine learning methods have the potential to facilitate learning using extremely large datasets. While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors that to obtain the corresponding labels. One of the approaches for addressing this issue is to use semi-supervised learning, which leverages not only the labeled samples, but also unlabeled feature vectors. Here, we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines. The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss, while maintaining the same quantum speedup as the Quantum LS-SVM." }, { "heading": "1 INTRODUCTION", "text": "Data sets used for training machine learning models are becoming increasingly large, leading to continued interest in fast methods for solving large-scale classification problems. One of the approaches being explored is training the predictive model using a quantum algorithm that has access to the training set stored in quantum-accessible memory. In parallel to research on efficient architectures for quantum memory (Blencowe, 2010), work on quantum machine learning algorithms and on quantum learning theory is under way (see for example Refs. (Biamonte et al., 2017; Dunjko & Briegel, 2018; Schuld & Petruccione, 2018) and (Arunachalam & de Wolf, 2017) for review). An early example of this approach is Quantum LS-SVM (Rebentrost et al., 2014a), which achieves exponential speedup compared to classical LS-SVM algorithm. Quantum LS-SVM uses quadratic least-squares loss and squared-L2 regularizer, and the optimization problem can be solved using the seminal HHL (Harrow et al., 2009) algorithm for solving quantum linear systems of equations. While progress has been made in quantum algorithms for supervised learning, it has been recently advocated that the focus should shift to unsupervised and semi-supervised setting (Perdomo-Ortiz et al., 2018).\nIn many domains, the most laborious part of assembling a training set is the collection of sample labels. Thus, in many scenarios, in addition to the labeled training set of size m we have access to many more feature vectors with missing labels. One way of utilizing these additional data points to improve the classification model is through semi-supervised learning. In semi-supervised learning, we are given m observations x1, ..., xm drawn from the marginal distribution p(x), where the l (l m) first data points come with labels y1, ..., yl drawn from conditional distribution p(y|x). Semi-supervised learning algorithms exploit the underlying distribution of the data to improve classification accuracy on unseen samples. In the approach considered here, the training samples are connected by a graph that captures their similarity.\nHere, we introduce a quantum algorithm for semi-supervised training of a kernel support vector machine classification model. We start with the existing Quantum LS-SVM (Rebentrost et al., 2014a), and use techniques from sample-based Hamiltonian simulation (Kimmel et al., 2017) to add a semisupervised term based on Laplacian SVM (Melacci & Belkin, 2011). As is standard in quantum machine learning (Li et al., 2019), the algorithm accesses training points and the adjacency matrix of the graph connecting samples via a quantum oracle. 
We show that, with respect to the oracle, the proposed algorithm achieves the same quantum speedup as LS-SVM, that is, adding the semi-supervised term does not lead to increased computational complexity." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 SEMI-SUPERVISED LEAST-SQUARES KERNEL SUPPORT VECTOR MACHINES.", "text": "Consider a problem where we are aiming to find predictors h(x) : X → R that are functions from an RKHS defined by a kernel K. In semi-supervised LS-SVMs in an RKHS, we are looking for a function h ∈ H that minimizes

\min_{h \in \mathcal{H}, b \in \mathbb{R}} \; \frac{\gamma}{2} \sum_{i=1}^{l} \left( y_i - (h(x_i) + b) \right)^2 + \frac{1}{2} \|h\|_{\mathcal{H}}^2 + \frac{1}{2} \|\nabla h\|_{E}^2,

where γ is a user-defined constant allowing for adjusting the regularization strength. The last term captures the squared norm of the graph gradient on the graph G that contains all training samples as vertices and expresses similarity between samples through (possibly weighted) edges G_{u,v},

\frac{1}{2} \|\nabla h\|_{E}^2 = \frac{1}{2} \sum_{u \sim v} G_{u,v} \left( \bar{h}_u - \bar{h}_v \right)^2 = \bar{h}^T L \bar{h},

where \bar{h}_u is the function value h(x_i) for the vertex u corresponding to training point x_i, and L is the combinatorial graph Laplacian matrix such that L[i,j] = D_j − G_{i,j}.
The Representer Theorem states that if H is an RKHS defined by a kernel K : X × X → R, then the solution minimizing the problem above is achieved by a function h that uses only the representers of the training points, that is, a function of the form h(x) = \sum_{j=1}^{m} c_j K_{x_j}(x) = \sum_{j=1}^{m} c_j K(x_j, x). Thus, we can translate the problem from the RKHS into a constrained quadratic optimization problem over finite, real vectors:

\min_{c, \xi, b} \; \frac{\gamma}{2} \sum_{i=1}^{l} \xi_i^2 + \frac{1}{2} c^T K c + \frac{1}{2} c^T K L K c \quad \text{s.t.} \quad 1 - y_i \left( b + \sum_{j=1}^{m} c_j K[i,j] \right) = \xi_i,

where l ≤ m is the number of training points with labels (these are grouped at the beginning of the training set), and \bar{h} = Kc, since the function h is defined using the representers K_{x_i}. The semi-supervised term, the squared norm of the graph gradient of h, \frac{1}{2}\|\nabla h\|_{E}^2, penalizes large changes of the function h over the edges of graph G. In defining the kernel K and the Laplacian L, and in the two regularization terms, we use all m samples. On the other hand, in calculating the empirical quadratic loss we only use the first l samples.
The solution to the semi-supervised LS-SVM is given by solving the following (m+1) × (m+1) system of linear equations:

\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & K + KLK + \gamma^{-1}\mathbb{1} \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \quad (1)

where y = (y_1, ..., y_m)^T, \mathbf{1} = (1, ..., 1)^T, \mathbb{1} is the identity matrix, K is the kernel matrix, L is the graph Laplacian matrix, γ is a hyperparameter, and α = (α_1, ..., α_m)^T is the vector of Lagrangian multipliers.
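For reference, the linear system (1) can be written down and solved classically in a few lines of NumPy; this sketch is only meant to make the system concrete and is not part of the quantum algorithm.

    import numpy as np

    def solve_semi_supervised_ls_svm(K, L, y, gamma):
        """Solve equation (1) classically: returns bias b and multipliers alpha."""
        m = K.shape[0]
        lower_right = K + K @ L @ K + np.eye(m) / gamma
        A = np.zeros((m + 1, m + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = lower_right
        rhs = np.concatenate(([0.0], y))
        solution = np.linalg.solve(A, rhs)
        return solution[0], solution[1:]   # b, alpha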
" }, { "heading": "2.2 QUANTUM COMPUTING AND QUANTUM LS-SVM", "text": "Quantum computers are devices which perform computation according to the laws of quantum mechanics, a mathematical framework for describing physical theories, in the language of linear algebra.
Quantum Systems. Any isolated, closed quantum physical system can be fully described by a unit-norm vector in a complex Hilbert space appropriate for that system; in quantum computing, the space is always finite-dimensional, C^d. In quantum mechanics and quantum computing, Dirac notation for linear algebra is commonly used. In Dirac notation, a vector x ∈ C^d and its conjugate transpose, which represents a functional C^d → C, are denoted by |x⟩ (called ket) and ⟨x| (called bra), respectively. We call {|e_i⟩}_{i=1}^{d} the computational basis, where |e_i⟩ = (0, ..., 1, ..., 0)^T with exactly one 1 entry in the i-th position. Any |v⟩ = (v_1, ..., v_d)^T can be written as |v⟩ = \sum_{i=1}^{d} v_i |e_i⟩; the coefficients v_i ∈ C are called probability amplitudes. Any unit vector |x⟩ ∈ C^d describes a d-level quantum state. Such a state is called a pure state. An inner product of |x_1⟩, |x_2⟩ ∈ C^d is written as ⟨x_1|x_2⟩. A two-level quantum state |ψ⟩ = α|0⟩ + β|1⟩, where |0⟩ = (1, 0)^T, |1⟩ = (0, 1)^T and α, β ∈ C with |α|^2 + |β|^2 = 1, is called a quantum bit, or qubit for short. When both α and β are nonzero, we say |ψ⟩ is in a superposition of the computational basis states |0⟩ and |1⟩; the two superposition states |+⟩ = (1/√2)(|0⟩ + |1⟩) and |−⟩ = (1/√2)(|0⟩ − |1⟩) are very common in quantum computing.
A composite quantum state of two distinct quantum systems |x_1⟩ ∈ C^{d_1} and |x_2⟩ ∈ C^{d_2} is described by the tensor product of the quantum states, |x_1⟩ ⊗ |x_2⟩ ∈ C^{d_1} ⊗ C^{d_2}. Thus, a state of an n-qubit system is a vector in the tensor product space (C^2)^{⊗n} = C^2 ⊗ C^2 ⊗ ... ⊗ C^2, and is written as \sum_{i=0}^{2^n - 1} \alpha_i |i⟩, where i is expressed using its binary representation; for example, for n = 4 we have |2⟩ = |0010⟩ = |0⟩ ⊗ |0⟩ ⊗ |1⟩ ⊗ |0⟩.
Transforming and Measuring Quantum States. Quantum operations manipulate quantum states in order to obtain some desired final state. Two types of manipulation of a quantum system are allowed by the laws of physics: unitary operators and measurements. Quantum measurement, if done in the computational basis, stochastically transforms the state of the system into one of the computational basis states, based on the squared magnitudes of the probability amplitudes; that is, (1/√2)(|0⟩ + |1⟩) will result in |0⟩ or |1⟩ with equal chance. Unitary operators are deterministic, invertible, norm-preserving linear transforms. A unitary operator U models a transformation of a quantum state |u⟩ to |v⟩ = U|u⟩. Note that U|u_1⟩ + U|u_2⟩ = U(|u_1⟩ + |u_2⟩): applying a unitary to a superposition of states has the same effect as applying it separately to each element of the superposition. In the quantum circuit model, unitary transformations are referred to as quantum gates – for example, one of the most common gates, the single-qubit Hadamard gate, is a unitary operator represented in the computational basis by the matrix

H := \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. \quad (2)

Note that H|0⟩ = |+⟩ and H|1⟩ = |−⟩.
Quantum Input Model. Quantum computation typically starts from all qubits in the |0⟩ state. To perform computation, access to input data is needed. In quantum computing, input is typically given by a unitary operator that transforms the initial state into the desired input state for the computation – such unitaries are commonly referred to as oracles, and the computational complexity of quantum algorithms is typically measured with access to an oracle as the unit. For problems involving large amounts of input data, such as quantum machine learning algorithms, an oracle that abstracts random access memory is often assumed. Quantum random access memory (qRAM) uses log N qubits to address any quantum superposition of N memory cells, which may contain either quantum or classical information. For example, qRAM allows accessing classical data entries x_i^j in quantum superposition by the transformation

\frac{1}{\sqrt{mp}} \sum_{i=1}^{m} \sum_{j=1}^{p} |i, j⟩ |0...0⟩ \; \xrightarrow{\text{qRAM}} \; \frac{1}{\sqrt{mp}} \sum_{i=1}^{m} \sum_{j=1}^{p} |i, j⟩ |x_i^j⟩,

where |x_i^j⟩ is a binary representation up to a given precision. Several approaches for creating quantum RAM are being considered (Giovannetti et al., 2008; Arunachalam et al., 2015; Biamonte et al., 2017), but it is still an open challenge, and subtle differences in qRAM architecture may erase any gains in computational complexity of a quantum algorithm (Aaronson, 2015).
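As a purely classical illustration of the state preparation used below, the following NumPy sketch builds the amplitude-encoded states |x_i⟩ and the combined state |X⟩ from a data matrix whose rows are the data points; it simulates the result of the oracle, not the oracle itself.

    import numpy as np

    def encode_point(x):
        """Amplitude-encode one feature vector: |x> = (1/||x||) sum_t (x)_t |t>."""
        return x / np.linalg.norm(x)

    def encode_dataset(X):
        """|X> = (1/sqrt(sum_i ||x_i||^2)) sum_i |i> (tensor) ||x_i|| |x_i>,
        returned as an (m*p)-dimensional state vector."""
        m, p = X.shape
        state = np.zeros(m * p)
        for i, x in enumerate(X):
            # ||x_i|| * |x_i> is just the raw row x_i.
            state[i * p:(i + 1) * p] = x
        return state / np.linalg.norm(state)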
Quantum Linear Systems of Equations. Given an input matrix A ∈ C^{n×n} and a vector b ∈ C^n, the goal of the linear system of equations problem is to find x ∈ C^n such that Ax = b. When A is Hermitian and full rank, the unique solution is x = A^{-1}b. If A is not a full-rank matrix, then A^{-1} is replaced by the Moore-Penrose pseudo-inverse. The HHL algorithm introduced an analogous problem in the quantum setting: assuming an efficient algorithm for preparing b as a quantum state b = \sum_{i=1}^{n} b_i |i⟩ using ⌈log n⌉ + 1 qubits, the algorithm applies the quantum subroutines of phase estimation, controlled rotation, and inverse phase estimation to obtain the state

|x⟩ = \frac{A^{-1} |b⟩}{\| A^{-1} |b⟩ \|}. \quad (3)

Intuitively, and at the risk of over-simplifying, the HHL algorithm works as follows: if A has the spectral decomposition A = \sum_{i=1}^{n} \lambda_i v_i v_i^T (where \lambda_i and v_i are the corresponding eigenvalues and eigenstates of A), then A^{-1} maps \lambda_i v_i \mapsto \frac{1}{\lambda_i} v_i. The vector b can also be written as a linear combination of A's eigenvectors v_i as b = \sum_{i=1}^{n} \beta_i v_i (we are not required to compute \beta_i). Then A^{-1}b = \sum_{i=1}^{n} \beta_i \frac{1}{\lambda_i} v_i. In general, A and A^{-1} are not unitary (unless all of A's eigenvalues have unit magnitude), therefore we are not able to apply A^{-1} directly on |b⟩. However, since U = e^{iA} = \sum_{i=1}^{n} e^{i\lambda_i} v_i v_i^T is unitary and has the same eigenvectors as A and A^{-1}, one can implement U and powers of U on a quantum computer by Hamiltonian simulation techniques; clearly, for any expected speed-up, one needs to enact e^{iA} efficiently. The HHL algorithm uses the phase estimation subroutine to estimate an approximation of \lambda_i up to a small error. The next step computes a conditional rotation on the approximated value of \lambda_i and an auxiliary qubit |0⟩ and outputs \frac{1}{\lambda_i}|0⟩ + \sqrt{1 - \frac{1}{\lambda_i^2}}|1⟩. The last step involves the inverse of phase estimation and a quantum measurement for getting rid of the garbage qubits, and outputs our desired state |x⟩ = A^{-1}|b⟩ = \sum_{i=1}^{n} \beta_i \frac{1}{\lambda_i} v_i.
Density Operators. The density operator formalism is an alternative formulation of quantum mechanics that allows probabilistic mixtures of pure states, more generally referred to as mixed states. A mixed state that describes an ensemble {p_i, |ψ_i⟩} is written as

\rho = \sum_{i=1}^{k} p_i |\psi_i⟩⟨\psi_i|, \quad (4)

where \sum_{i=1}^{k} p_i = 1 forms a probability distribution and \rho is called a density operator, which in a finite-dimensional system, in the computational basis, is a positive semi-definite matrix with Tr(\rho) = 1.
A unitary operator U maps a quantum state expressed as a density operator \rho to U\rho U^{\dagger}, where U^{\dagger} is the conjugate transpose of the operator U.
Partial Trace of a Composite Quantum System. Consider a two-part quantum system in a state described by the tensor product of two density operators, \rho \otimes \sigma. The partial trace, tracing out the second part of the quantum system, is defined as the linear operator that leaves the first part of the system in the state Tr_2(\rho \otimes \sigma) = \rho \, Tr(\sigma), where Tr(\sigma) is the trace of the matrix \sigma.
To obtain the kernel matrix K as a density matrix, quantum LS-SVM (Rebentrost et al., 2014b) relies on the partial trace, and on a quantum oracle that can convert, in superposition, each data point {x_i}_{i=1}^{m}, x_i ∈ R^p, to a quantum state |x_i⟩ = \frac{1}{\|x_i\|} \sum_{t=1}^{p} (x_i)_t |t⟩, where (x_i)_t refers to the t-th feature value of data point x_i, assuming the oracle is given \|x_i\| and y_i. The vector of labels is given in the same fashion as |y⟩ = \frac{1}{\|y\|} \sum_{i=1}^{m} y_i |i⟩. For the preparation of the normalized kernel matrix K' = \frac{1}{Tr(K)} K, where K = X^T X, we need to prepare a quantum state combining all data points in quantum superposition, |X⟩ = \frac{1}{\sqrt{\sum_{i=1}^{m} \|x_i\|^2}} \sum_{i=1}^{m} |i⟩ \otimes \|x_i\| \, |x_i⟩.
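Continuing the classical sketch above, the reduced density matrix of |X⟩⟨X| after tracing out the feature register reproduces the normalized kernel matrix of equation (5) below; a short NumPy sanity check, with rows of X taken as the data points:

    import numpy as np

    def kernel_density_matrix(X):
        """Partial trace of |X><X| over the feature register gives K' = K / Tr(K)."""
        m, p = X.shape
        state = X / np.linalg.norm(X)   # amplitudes, indexed by (i, t)
        # Tr_2 |X><X| : contract the feature index t.
        return state @ state.T          # K'[i, j] proportional to <x_i|x_j>

    # Sanity check against K / Tr(K) with the linear kernel K[i,j] = x_i . x_j.
    X = np.random.randn(5, 3)
    K = X @ X.T
    assert np.allclose(kernel_density_matrix(X), K / np.trace(K))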
Density Operators. The density operator formalism is an alternative formulation of quantum mechanics that allows probabilistic mixtures of pure states, more generally referred to as mixed states. A mixed state that describes an ensemble {p_i, |ψ_i⟩} is written as

\rho = \sum_{i=1}^{k} p_i |\psi_i\rangle\langle\psi_i|, \tag{4}

where the p_i, with Σ_{i=1}^k p_i = 1, form a probability distribution, and ρ is called a density operator; in a finite-dimensional system, in the computational basis, it is a positive semi-definite matrix with Tr(ρ) = 1. A unitary operator U maps a quantum state expressed as a density operator ρ to UρU†, where U† is the conjugate transpose of the operator U.

Partial Trace of a Composite Quantum System. Consider a two-part quantum system in a state described by the tensor product of two density operators, ρ ⊗ σ. The partial trace, tracing out the second part of the quantum system, is defined as the linear operator that leaves the first part of the system in the state Tr₂(ρ ⊗ σ) = ρ Tr(σ), where Tr(σ) is the trace of the matrix σ. To obtain the kernel matrix K as a density matrix, quantum LS-SVM (Rebentrost et al., 2014b) relies on the partial trace, and on a quantum oracle that can convert, in superposition, each data point {x_i}_{i=1}^m, x_i ∈ R^p, to a quantum state |x_i⟩ = (1/‖x_i‖) Σ_{t=1}^p (x_i)_t |t⟩, where (x_i)_t refers to the t-th feature value of the data point x_i, assuming the oracle is also given ‖x_i‖ and y_i. The vector of labels is given in the same fashion as |y⟩ = (1/‖y‖) Σ_{i=1}^m y_i|i⟩. For the preparation of the normalized kernel matrix K′ = K/Tr(K), where K = X^T X, we need to prepare a quantum state combining all data points in quantum superposition, |X⟩ = (1/√(Σ_{i=1}^m ‖x_i‖²)) Σ_{i=1}^m |i⟩ ⊗ ‖x_i‖|x_i⟩. The normalized kernel matrix is obtained by discarding the training-set state,

K' = \mathrm{Tr}_2(|X\rangle\langle X|) = \frac{1}{\sum_{i=1}^{m} \|x_i\|^2} \sum_{i,j=1}^{m} \|x_i\| \|x_j\| \langle x_i | x_j \rangle \, |i\rangle\langle j|. \tag{5}

The approach used above to construct the density matrix corresponding to a linear kernel matrix can be extended to polynomial kernels (Rebentrost et al., 2014b).

LMR Technique for Density Operator Exponentiation. In HHL-based quantum machine learning algorithms, including the method proposed here, the matrix A for the Hamiltonian simulation within the HHL algorithm is based on data. For example, A can contain the kernel matrix K captured in the quantum system as a density matrix. One then needs to be able to efficiently compute e^{−iK∆t}, where K is scaled by the trace of the kernel matrix. Since K is not sparse, a strategy similar to (Lloyd et al., 2014) is adopted for the exponentiation of a non-sparse density matrix:

\mathrm{Tr}_1 \{ e^{-iS\Delta t} (K \otimes \sigma) e^{iS\Delta t} \} = \sigma - i\Delta t [K, \sigma] + O(\Delta t^2) \approx e^{-iK\Delta t} \sigma e^{iK\Delta t}, \tag{6}

where S = Σ_{i,j} |i⟩⟨j| ⊗ |j⟩⟨i| is the swap operator, and the facts Tr₁{S(K ⊗ σ)} = Kσ and Tr₁{(K ⊗ σ)S} = σK are used. Equation (6) summarizes the LMR technique: approximating e^{−iK∆t} σ e^{iK∆t} up to error O(∆t²) is equivalent to simulating the swap operator S, applying it to the state K ⊗ σ, and discarding the first system by taking the partial trace. Since the swap operator is sparse, its simulation is efficient. Therefore, the LMR trick provides an efficient way to approximate the exponentiation of a non-sparse density matrix.

Quantum LS-SVM. Quantum LS-SVM (Rebentrost et al., 2014b) uses the partial trace to construct the density operator corresponding to the kernel matrix K. Once the kernel matrix K becomes available as a density operator, quantum LS-SVM proceeds by applying the HHL algorithm to solve the system of linear equations associated with LS-SVM, using the LMR technique for performing the density operator exponentiation e^{−iK∆t}, where the density matrix K encodes the kernel matrix.
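The partial-trace construction in equation (5) can be checked classically: tracing out the feature register of |X⟩⟨X| leaves exactly the trace-normalized Gram matrix. Below is a small NumPy sketch (illustrative only; the feature register is simulated explicitly, and data points are taken as rows of X, so the Gram matrix is X X^T).

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 3, 4                       # m data points with p features
X = rng.standard_normal((m, p))   # rows are data points x_i

# |X> = (1/sqrt(sum_i ||x_i||^2)) sum_i |i> (x) ||x_i|| |x_i>,
# where |x_i> = x_i / ||x_i||, so the i-th block is simply x_i.
psi = X.flatten() / np.linalg.norm(X)          # shape (m*p,)
rho = np.outer(psi, psi)                        # |X><X|

# Partial trace over the p-dimensional feature register.
Kprime = np.trace(rho.reshape(m, p, m, p), axis1=1, axis2=3)

K = X @ X.T                                     # linear kernel Gram matrix
assert np.allclose(Kprime, K / np.trace(K))     # equation (5)
```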
" }, { "heading": "3 QUANTUM SEMI-SUPERVISED LEAST SQUARE SVM.", "text": "Semi-Supervised Least Square SVM involves solving the following system of linear equations:

\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & K + KLK + \gamma^{-1} I \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ y \end{bmatrix} = A^{-1} \begin{bmatrix} 0 \\ y \end{bmatrix} \tag{7}

In the quantum setting, the task is to generate |b, α⟩ = Â^{-1}|0, y⟩, where Â = A/Tr(A) is the normalized matrix. This linear system differs from the one in LS-SVM in that instead of K we have K + KLK. While this difference is of little significance for classical solvers, in quantum systems we cannot simply multiply and then add the matrices and then apply quantum LS-SVM; we are limited by the unitary nature of quantum transformations.

In order to obtain the solution to the quantum Semi-Supervised Least Square SVM, we use the following steps. First, we read in the graph information to obtain the scaled graph Laplacian matrix as a density operator. Next, we use polynomial Hermitian exponentiation for computing the matrix inverse (K + KLK)^{-1}." }, { "heading": "3.1 QUANTUM INPUT MODEL FOR THE GRAPH LAPLACIAN", "text": "In the semi-supervised model used here, we assume that we have information on the similarity of the training samples, in the form of a graph G that uses n edges to connect similar training samples, represented as m vertices. We assume that for each sample, G contains its d most similar other samples; that is, the degree of each vertex is d. To have the graph available as a quantum density operator, we observe that the graph Laplacian L is the Gram matrix of the rows of the m × n graph incidence matrix G_I, that is, L = G_I G_I^T. We assume oracle access to the graph adjacency list, allowing us to construct, in superposition, states corresponding to the rows of the graph incidence matrix G_I,

|v_i\rangle = \frac{1}{\sqrt{d}} \sum_{t=1}^{n} G_I[i, t] \, |t\rangle.

That is, state |v_i⟩ has probability amplitude 1/√d for each edge |t⟩ incident with vertex i, and null probability amplitude for all other edges. In superposition, we prepare a quantum state combining the rows of the incidence matrix for all vertices, to obtain

|G_I\rangle = \frac{1}{\sqrt{md}} \sum_{i=1}^{m} |i\rangle \otimes |v_i\rangle.

The graph Laplacian matrix L, composed of inner products of the rows of G_I, is obtained by discarding the second part of the system,

L = \mathrm{Tr}_2(|G_I\rangle\langle G_I|) = \frac{1}{md} \sum_{i,j=1}^{m} d \, \langle v_i | v_j \rangle \, |i\rangle\langle j| = \frac{1}{m} \sum_{i,j=1}^{m} \langle v_i | v_j \rangle \, |i\rangle\langle j|. \tag{8}
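The identity L = G_I G_I^T behind equation (8), and its trace normalization, can be sanity-checked classically. A minimal NumPy sketch is below (illustrative only; it assumes an oriented incidence matrix with entries ±1 and uses the complete graph so that every vertex has the same degree d).

```python
import itertools
import numpy as np

m = 4
edges = list(itertools.combinations(range(m), 2))  # complete graph on m vertices
n = len(edges)

# Oriented incidence matrix G_I: one column per edge, +1 at one endpoint, -1 at the other.
GI = np.zeros((m, n))
for t, (i, j) in enumerate(edges):
    GI[i, t], GI[j, t] = 1.0, -1.0

L = GI @ GI.T                      # graph Laplacian: degree matrix minus adjacency
d = m - 1                          # vertex degree in the complete graph

# Density-operator version of equation (8): rows of G_I stacked as one state.
psi = GI.flatten() / np.sqrt(m * d)           # |G_I>, since each row has norm sqrt(d)
rho = np.outer(psi, psi)
L_density = np.trace(rho.reshape(m, n, m, n), axis1=1, axis2=3)

assert np.allclose(L_density, L / (m * d))    # scaled Laplacian, unit trace
assert np.isclose(np.trace(L_density), 1.0)
```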
" }, { "heading": "3.2 POLYNOMIAL HERMITIAN EXPONENTIATION FOR SEMI-SUPERVISED LEARNING", "text": "For computing the matrix inverse (K + KLK)^{-1} on a quantum computer that runs our quantum machine learning algorithm with the HHL algorithm as a subroutine, we need to efficiently compute e^{−i(K+KLK)∆t} σ e^{i(K+KLK)∆t}. For this purpose, we adapt the generalized LMR technique for simulating Hermitian polynomials proposed in (Kimmel et al., 2017) to this specific case. The simulation of e^{−iK∆t} follows from the original LMR algorithm, and therefore we focus here only on simulating e^{−iKLK∆t}. The final dynamics for (K + KLK)^{-1} can be obtained by sampling from the two separate output states for e^{−iKLK∆t} and e^{−iK∆t}.

Simulating e^{iKLK∆t}. Let D(H) denote the space of density operators associated with the state space H. Let K†, K, L ∈ D(H) be the density operators associated with the kernel matrix and the Laplacian, respectively. We will need two separate systems holding the kernel matrix; to distinguish between them, we denote the first as K† and the second as K; since K is real and symmetric, these are indeed equal. The kernel and Laplacian matrices K†, K, L are not sparse, therefore we adapt the generalized LMR technique for simulating Hermitian polynomials to our specific case B = K†LK.

For adapting the generalized LMR technique to our problem, we need to generate a quantum state ρ′ = |0⟩⟨0| ⊗ ρ″ + |1⟩⟨1| ⊗ ρ‴ with Tr(ρ″ + ρ‴) = 1, such that

\mathrm{Tr}_1 \{ \mathrm{Tr}_3 \{ e^{-iS'\Delta} (\rho' \otimes \sigma) e^{iS'\Delta} \} \} = \sigma - i\Delta [B, \sigma] + O(\Delta^2) = e^{-iB\Delta} \sigma e^{iB\Delta} + O(\Delta^2), \tag{9}

where B = ρ″ − ρ‴ = ½K†LK + ½KLK† = KLK, and S′ := |0⟩⟨0| ⊗ S + |1⟩⟨1| ⊗ (−S) is a controlled partial swap in the forward (+S) and backward (−S) direction in time, with

e^{-iS'\Delta} = |0\rangle\langle 0| \otimes e^{-iS\Delta} + |1\rangle\langle 1| \otimes e^{iS\Delta}.

Therefore, with one copy of ρ′, we obtain the simulation of e^{−iB∆} up to error O(∆²). If we choose the time slice ∆ = δ/t and repeat the above procedure t²/δ times, we are able to simulate e^{−iBt} up to error O(δ) using n = O(t²/δ) copies of ρ′.

Generating ρ′ = |0⟩⟨0| ⊗ ρ″ + |1⟩⟨1| ⊗ ρ‴. Figure 1 shows the quantum circuit for creating ρ′ = |0⟩⟨0| ⊗ ρ″ + |1⟩⟨1| ⊗ ρ‴ such that Tr(ρ″ + ρ‴) = 1 and B = ρ″ − ρ‴ = KLK.

The analysis of the steps performed by the circuit depicted in Fig. 1 is as follows. Let P be the cyclic permutation of three copies of H_A that operates as P|j1, j2, j3⟩ = |j3, j1, j2⟩. In operator form it can be written as

P := \sum_{j_1, j_2, j_3 = 1}^{\dim H_A} |j_3\rangle\langle j_1| \otimes |j_1\rangle\langle j_2| \otimes |j_2\rangle\langle j_3|. \tag{10}

The input state to the circuit depicted in Fig. 1 is

|+\rangle\langle +| \otimes K^\dagger \otimes L \otimes K = \frac{1}{2} \sum_{i,j \in \{0,1\}} |i\rangle\langle j| \otimes K^\dagger \otimes L \otimes K.

Applying the permutation P, controlled on the first qubit, to K† ⊗ L ⊗ K gives

I = \frac{1}{2} \Big[ |0\rangle\langle 0| \otimes K^\dagger \otimes L \otimes K + |0\rangle\langle 1| \otimes (K^\dagger \otimes L \otimes K) P + |1\rangle\langle 0| \otimes P (K^\dagger \otimes L \otimes K) + |1\rangle\langle 1| \otimes P (K^\dagger \otimes L \otimes K) P \Big].

After discarding the third and second registers sequentially, by applying the corresponding partial trace operators, we get

II = \mathrm{Tr}_2[\mathrm{Tr}_3(I)] = |0\rangle\langle 0| \otimes \tfrac{1}{2} K^\dagger + |0\rangle\langle 1| \otimes \tfrac{1}{2} K^\dagger L K + |1\rangle\langle 0| \otimes \tfrac{1}{2} K L K^\dagger + |1\rangle\langle 1| \otimes \tfrac{1}{2} K,

where the off-diagonal and last terms are obtained from

\mathrm{Tr}_2[\mathrm{Tr}_3[(K^\dagger \otimes L \otimes K) P]] = K^\dagger L K, \quad \mathrm{Tr}_2[\mathrm{Tr}_3[P (K^\dagger \otimes L \otimes K)]] = K L K^\dagger, \quad \mathrm{Tr}_2[\mathrm{Tr}_3[P (K^\dagger \otimes L \otimes K) P]] = K.

After applying a Hadamard gate H = (1/√2)[(|0⟩ + |1⟩)⟨0| + (|0⟩ − |1⟩)⟨1|] on the first qubit of II, expanding, and collecting terms, we get

III = (H \otimes \mathbb{1}) \, II \, (H \otimes \mathbb{1}) = |0\rangle\langle 0| \otimes \tfrac{1}{2} \big( \tfrac{1}{2} K^\dagger + \tfrac{1}{2} K^\dagger L K + \tfrac{1}{2} K L K^\dagger + \tfrac{1}{2} K \big) + |0\rangle\langle 1| \otimes \tfrac{1}{2} \big( \tfrac{1}{2} K^\dagger - \tfrac{1}{2} K^\dagger L K + \tfrac{1}{2} K L K^\dagger - \tfrac{1}{2} K \big) + |1\rangle\langle 0| \otimes \tfrac{1}{2} \big( \tfrac{1}{2} K^\dagger + \tfrac{1}{2} K^\dagger L K - \tfrac{1}{2} K L K^\dagger - \tfrac{1}{2} K \big) + |1\rangle\langle 1| \otimes \tfrac{1}{2} \big( \tfrac{1}{2} K^\dagger - \tfrac{1}{2} K^\dagger L K - \tfrac{1}{2} K L K^\dagger + \tfrac{1}{2} K \big).

The last step applies a measurement in the computational basis {|0⟩⟨0|, |1⟩⟨1|} on the first register to obtain our desired state ρ′,

IV = |0\rangle\langle 0| \otimes \tfrac{1}{2} \big( \tfrac{1}{2} K^\dagger + \tfrac{1}{2} K^\dagger L K + \tfrac{1}{2} K L K^\dagger + \tfrac{1}{2} K \big) + |1\rangle\langle 1| \otimes \tfrac{1}{2} \big( \tfrac{1}{2} K^\dagger - \tfrac{1}{2} K^\dagger L K - \tfrac{1}{2} K L K^\dagger + \tfrac{1}{2} K \big).

Defining ρ″ = ½(½K† + ½K†LK + ½KLK† + ½K) and ρ‴ = ½(½K† − ½K†LK − ½KLK† + ½K), we see that the final state has the form ρ′ = |0⟩⟨0| ⊗ ρ″ + |1⟩⟨1| ⊗ ρ‴ with Tr(ρ″ + ρ‴) = 1, and we obtain ρ″ − ρ‴ = ½K†LK + ½KLK† = B. Now, having the output state ρ′, we are ready to apply the generalized LMR scheme in (9) to simulate e^{−iKLK∆t} σ e^{iKLK∆t} up to error O(∆t²). Comparing the LMR technique in equation (6) with the generalized LMR for the special case of KLK in equation (9), we see that approximating e^{−iKLK∆t} σ e^{iKLK∆t} up to error O(∆t²) is equivalent to simulating the controlled partial swap operator S′, applying it to the state ρ′ ⊗ σ, and discarding the third and first systems by taking partial trace operations, respectively. Since S′ is also sparse and its simulation is efficient, the generalized LMR technique offers an efficient approach to simulating e^{iKLK∆t}.

Algorithm 1 Quantum Semi-Supervised LS-SVM
Input: The data point set {x1, ..., xl, ..., xm} with the first l data points labeled and the rest unlabeled, y = (y1, ..., yl), and the graph G
Output: The classifier |α, b⟩ = A^{-1}|y⟩
1: Quantum data preparation. Encode classical data points into quantum data points using quantum oracles O_x : {x1, ..., xl, ..., xm} ↦ |X⟩ = (1/√(Σ_{i=1}^m ‖x_i‖²)) Σ_{i=1}^m |i⟩ ⊗ ‖x_i‖|x_i⟩ and O_y : y ↦ |y⟩.
2: Quantum Laplacian preparation. Prepare the quantum density matrix using oracle access to G.
3: Matrix inversion. Compute the matrix inversion |α, b⟩ = A^{-1}|y⟩ via the HHL algorithm. A quantum circuit for the HHL algorithm has three main steps:
4: Phase estimation, including efficient Hamiltonian simulation involving KLK (Section 3.2)
5: Controlled rotation
6: Uncomputing
7: Classification. Based on the swap test algorithm, same as in quantum LS-SVM.
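The partial-trace identities behind II can be checked numerically on small matrices. One caveat: the direction of the cyclic permutation and the placement of daggers matter, so the NumPy sketch below (illustrative only) uses the convention, P applied on the left and P† on the right, under which all three stated identities hold for density matrices; this may differ from the cycle direction as literally written in equation (10).

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3

def rand_density(d):
    # Random real density matrix: symmetric positive semi-definite, unit trace.
    M = rng.standard_normal((d, d))
    rho = M @ M.T
    return rho / np.trace(rho)

K, L = rand_density(d), rand_density(d)   # K real symmetric, so K-dagger = K

# Cyclic permutation P|j1, j2, j3> = |j3, j1, j2>.
P = np.zeros((d**3, d**3))
for j1 in range(d):
    for j2 in range(d):
        for j3 in range(d):
            P[(j3 * d + j1) * d + j2, (j1 * d + j2) * d + j3] = 1.0

def tr23(M3):
    # Partial trace over registers 2 and 3 of a (d^3 x d^3) operator.
    return np.einsum('abcdbc->ad', M3.reshape(d, d, d, d, d, d))

M = np.kron(np.kron(K, L), K)             # K-dagger (x) L (x) K
KLK = K @ L @ K

assert np.allclose(tr23(P @ M), KLK)       # Tr23[P (K† ⊗ L ⊗ K)]    = K L K†
assert np.allclose(tr23(M @ P.T), KLK)     # Tr23[(K† ⊗ L ⊗ K) P†]   = K† L K
assert np.allclose(tr23(P @ M @ P.T), K)   # Tr23[P (K† ⊗ L ⊗ K) P†] = K
# Hence rho'' - rho''' = (K†LK + KLK†)/2 = KLK, as required for B.
```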
" }, { "heading": "3.3 QUANTUM SEMI-SUPERVISED LS-SVM ALGORITHM AND ITS COMPLEXITY", "text": "The quantum LS-SVM in (Rebentrost et al., 2014b) offers an exponential speedup, O(log mp), over the classical time complexity for solving the SVM as a quadratic problem, which requires time O(log(ε^{-1}) poly(p, m)), where ε is the desired error. The exponential speedup in p occurs as the result of the fast quantum computation of the kernel matrix, and relies on the existence of efficient oracle access to the data. The speedup in m is due to applying quantum matrix inversion for solving LS-SVM, which in turn rests on a fast algorithm for the exponentiation of the resulting non-sparse matrix. Our algorithm introduces two additional steps: preparing the Laplacian density matrix, and Hamiltonian simulation for KLK. The first step involves oracle access to a sparse graph adjacency list representation, which is at least as efficient as the oracle access to non-sparse data points. The Hamiltonian simulation involves simulating a sparse conditional partial swap operator, which yields an efficient strategy for applying e^{−iKLK∆t} in time Õ(log(m)∆t), where the Õ notation hides more slowly growing factors in the simulation (Berry et al., 2007)." }, { "heading": "3.4 COMPARISON WITH ALTERNATIVE APPROACHES", "text": "Considerable effort has been devoted to designing fast classical algorithms for training SVMs. Decomposition-based methods such as SMO (Platt, 1998) are able to efficiently manage problems with a large number of features p, but their computational complexity is super-linear in m. Other training strategies (Suykens & Vandewalle, 1999; Fung & Mangasarian, 2005; Keerthi & DeCoste, 2005) are linear in m but scale quadratically in p in the worst case. The Pegasos algorithm (Shalev-Shwartz et al., 2011) for non-linear kernels improves the complexity to Õ(m/(λε)), where λ and ε are the regularization parameter of the SVM and the error of the solution, respectively.

Beyond the classical realm, three quantum algorithms for training linear models have been proposed: the quantum LS-SVM that involves an L2 regularizer (Rebentrost et al., 2014a), a recently proposed quantum sparse SVM, which is limited to a linear kernel (Arodz & Saeedi, 2019), and a quantum training algorithm that solves a maximin problem resulting from a maximum – not average – loss over the training set (Li et al., 2019)." } ]
2019
null
SP:e58dc2d21175a62499405b7f4c3a03b135530838
[ "This paper proposes to employ the likelihood of the latent representation of images as the optimization target in the Glow (Kingma and Dhariwal, 2018) framework. The authors argue that to optimize the ''proxy for image likelihood'' has two advantages: First, the landscapes of the surface are more smooth; Second, a latent sample point in the regions that have a low likelihood is able to generate desired outcomes. In the experimental analysis, the authors compare their proposed method with several baselines and show prior performance.", "This paper investigates the performance of invertible generative models for solving inverse problems. They argue that their most significant benefit over GAN priors is the lack of representation error that (1) enables invertible models to perform well on out-of-distribution data and (2) results in a model that does not saturate with increased number of measurements (as observed with GANs). They use a pre-trained Glow invertible network for the generator and solve a proxy for the maximum likelihood formulation of the problem, where the likelihood of an image is replaced by the likelihood of its latent representation. They demonstrate results on problems such as denoising, inpainting and compressed sensing. In all these applications, the invertible network consistently outperforms DCGAN across all noise levels/number of measurements. Furthermore, they demonstrate visually reasonable results on natural images significantly different from those in the training dataset." ]
Trained generative models have shown remarkable performance as priors for inverse problems in imaging. For example, Generative Adversarial Network priors permit recovery of test images from 5-10x fewer measurements than sparsity priors. Unfortunately, these models may be unable to represent any particular image because of architectural choices, mode collapse, and bias in the training dataset. In this paper, we demonstrate that invertible neural networks, which have zero representation error by design, can be effective natural signal priors at inverse problems such as denoising, compressive sensing, and inpainting. Our formulation is an empirical risk minimization that does not directly optimize the likelihood of images, as one would expect. Instead we optimize the likelihood of the latent representation of images as a proxy, as this is empirically easier. For compressive sensing, our formulation can yield higher accuracy than sparsity priors across almost all undersampling ratios. For the same accuracy on test images, they can use 10-20x fewer measurements. We demonstrate that invertible priors can yield better reconstructions than sparsity priors for images that have rare features of variation within the biased training set, including out-of-distribution natural images.
[]
[ { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Shervin Minaee", "Amirali" ], "title": "Abdolrashidi. Finger-gan: Generating realistic fingerprint images using connectivity imposed gan", "venue": "arXiv preprint arXiv:1812.10482,", "year": 2018 }, { "authors": [ "Hoo-Chang Shin", "Neil A Tenenholtz", "Jameson K Rogers", "Christopher G Schwarz", "Matthew L Senjem", "Jeffrey L Gunter", "Katherine P Andriole", "Mark Michalski" ], "title": "Medical image synthesis for data augmentation and anonymization using generative adversarial networks", "venue": "In International Workshop on Simulation and Synthesis in Medical Imaging,", "year": 2018 }, { "authors": [ "Yuhua Chen", "Feng Shi", "Anthony G Christodoulou", "Yibin Xie", "Zhengwei Zhou", "Debiao Li" ], "title": "Efficient and accurate mri super-resolution using a generative adversarial network and 3d multilevel densely connected network", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Ashish Bora", "Ajil Jalal", "Eric Price", "Alexandros G Dimakis" ], "title": "Compressed sensing using generative models", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Paul Hand", "Vladislav Voroninski" ], "title": "Global guarantees for enforcing deep generative priors by empirical risk", "venue": "arXiv preprint arXiv:1705.07576,", "year": 2017 }, { "authors": [ "Paul Hand", "Oscar Leong", "Vlad Voroninski" ], "title": "Phase retrieval under a generative prior", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Muhammad Asim", "Fahad Shamshad", "Ali Ahmed" ], "title": "Blind image deconvolution using deep generative priors", "venue": "arXiv preprint arXiv:1802.04073,", "year": 2018 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Aidan N Gomez", "Mengye Ren", "Raquel Urtasun", "Roger B Grosse" ], "title": "The reversible residual network: Backpropagation without storing activations", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jörn-Henrik Jacobsen", "Arnold Smeulders", "Edouard Oyallon" ], "title": "i-revnet: Deep invertible networks", "venue": "arXiv preprint arXiv:1802.07088,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kostadin Dabov", "Alessandro Foi", "Karen 
Egiazarian" ], "title": "Video denoising by sparse 3d transformdomain collaborative filtering", "venue": "In 2007 15th European Signal Processing Conference,", "year": 2007 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "C. Wah", "S. Branson", "P. Welinder", "P. Perona", "S. Belongie" ], "title": "The Caltech-UCSD Birds-200-2011 Dataset", "venue": null, "year": 2011 }, { "authors": [ "Maria-Elena Nilsback", "Andrew Zisserman" ], "title": "Automated flower classification over a large number of classes", "venue": "Sixth Indian Conference on Computer Vision, Graphics & Image Processing,", "year": 2008 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Lynton Ardizzone", "Jakob Kruse", "Sebastian Wirkert", "Daniel Rahner", "Eric W Pellegrini", "Ralf S Klessen", "Lena Maier-Hein", "Carsten Rother", "Ullrich Köthe" ], "title": "Analyzing inverse problems with invertible neural networks", "venue": "arXiv preprint arXiv:1808.04730,", "year": 2018 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Deep image prior", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Reinhard Heckel", "Paul Hand" ], "title": "Deep decoder: Concise image representations from untrained non-convolutional networks", "venue": "arXiv preprint arXiv:1810.03982,", "year": 2018 }, { "authors": [ "ShahRukh Athar", "Evgeniy Burnaev", "Victor Lempitsky" ], "title": "Latent convolutional models", "venue": "arXiv preprint arXiv:1806.06284,", "year": 2018 }, { "authors": [ "Nilsback", "Zisserman", "Birds Wah" ], "title": "Flowers dataset contains 8189 color images resized to 64×64 out of which 500 images are spared for testing. Birds dataset contains a total of 11,788 images, which were center aligned and resized to 64× 64 out of which 5794 images are set aside for testing", "venue": null, "year": 2011 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generative deep neural networks have shown remarkable performance as natural signal priors in imaging inverse problems, such as denoising, inpainting, compressed sensing, blind deconvolution, and phase retrieval. These generative models can be trained from datasets consisting of images of particular natural signal classes, such as faces, fingerprints, MRIs, and more (Karras et al., 2017; Minaee and Abdolrashidi, 2018; Shin et al., 2018; Chen et al., 2018). Some such models, including variational autoencoders (VAEs) and generative adversarial networks (GANs), learn an explicit low-dimensional manifold that approximates a natural signal class (Goodfellow et al., 2014; Kingma and Welling, 2013; Rezende et al., 2014). We will refer to such models as GAN priors. With an explicit parameterization of the natural signal manifold by a low dimensional latent representation,\nthese generative models allow for direct optimization over a natural signal class. Consequently, they can obtain significant performance improvements over non-learning based methods. For example, GAN priors have been shown to outperform sparsity priors at compressed sensing with 5-10x fewer measurements. Additionally, GAN priors have led to theory for signal recovery in the linear compressive sensing and nonlinear phase retrieval problems (Bora et al., 2017; Hand and Voroninski, 2017; Hand et al., 2018), and they have also shown promising results for the nonlinear blind image deblurring problem (Asim et al., 2018).\nA significant drawback of GAN priors for solving inverse problems is that they can have representation error or bias due to architecture and training. This can happen for many reasons, including because the generator only approximates the natural signal manifold, because the natural signal manifold is of higher dimensionality than modeled, because of mode collapse, or because of bias in the training dataset itself. As many aspects of generator architecture and training lack clear principles, representation error of GANs may continue to be a challenge even after substantial hand crafting and engineering. Additionally, learning-based methods are particularly vulnerable to the biases of their training data, and training data, no matter how carefully collected, will always contain degrees of bias. As an example, the CelebA dataset (Liu et al., 2015) is biased toward people who are young, who do not have facial hair or glasses, and who have a light skin tone. As we will see, a GAN prior trained on this dataset learns these biases and exhibits image recovery failures because of them.\nIn contrast, invertible neural networks can be trained as generators with zero representation error. These networks are invertible (one-to-one and onto) by architectural design (Dinh et al., 2016; Gomez et al., 2017; Jacobsen et al., 2018; Kingma and Dhariwal, 2018). Consequently, they are capable of recovering any image, including those significantly out-of-distribution relative to a biased training set; see Figure 1. We call the domain of an invertible generator the latent space, and we call the range of the generator the signal space. These must have equal dimensionality. Flow-based invertible generative models are composed of a sequence of learned invertible transformations. 
Their strengths include: their architecture allows exact and efficient latent-variable inference, direct log-likelihood evaluation, and efficient image synthesis; they have the potential for significant memory savings in gradient computations; and they can be trained by directly optimizing the likelihood of training images. This paper emphasizes an additional strength: because they lack representation error, invertible models can mitigate dataset bias and improve performance on inverse problems with out-of-distribution data.

In this paper, we study generative invertible neural network priors for imaging inverse problems. We will specifically use the Glow architecture, though our framework could be used with other architectures. A Glow-based model is composed of a sequence of invertible affine coupling layers, 1x1 convolutional layers, and normalization layers. Glow models have been successfully trained to generate high resolution photorealistic images of human faces (Kingma and Dhariwal, 2018).
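As an illustration of why such architectures are invertible by design, the sketch below gives a minimal affine coupling layer in PyTorch. It is a simplified stand-in, not the actual Glow implementation: the conditioning network `net`, the even channel split, and the absence of actnorm and 1x1 convolutions are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Split dimensions in half; transform one half conditioned on the other.

    Forward: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1).
    Inverse: x2 = (y2 - t(y1)) * exp(-s(y1)), so inversion is exact.
    """
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2, 64), nn.ReLU(), nn.Linear(64, dim)
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                       # keep scales stable
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)                 # log |det Jacobian| of the layer
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        s, t = self.net(y1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=-1)

layer = AffineCoupling(dim=8)
x = torch.randn(2, 8)
y, _ = layer(x)
assert torch.allclose(layer.inverse(y), x, atol=1e-5)  # exact invertibility
```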
We present a method for using pretrained generative invertible neural networks as priors for imaging inverse problems. The invertible generator, once trained, can be used for a wide variety of inverse problems, with no specific knowledge of those problems used during the training process. Our method is an empirical risk formulation based on the following proxy: we penalize the likelihood of an image's latent representation instead of the image's likelihood itself. While this may be counterintuitive, it admits optimization problems that are easier to solve empirically. In the case of compressive sensing, our formulation succeeds even without direct penalization of this proxy likelihood, with regularization occurring through initialization of a gradient descent in latent space.

We train a generative invertible model using the CelebA dataset. With this fixed model as a signal prior, we study its performance at denoising, compressive sensing, and inpainting. For denoising, it can outperform BM3D (Dabov et al., 2007). For compressive sensing on test images, it can obtain higher quality reconstructions than Lasso across almost all subsampling ratios, and at similar reconstruction errors can succeed with 10-20x fewer measurements than Lasso. It provides an improvement of about 2x fewer linear measurements when compared to Bora et al. (2017). Despite being trained on the CelebA dataset, our generative invertible prior can give higher quality reconstructions than Lasso on out-of-distribution images of faces, and, to a lesser extent, unrelated natural images. Our invertible prior outperforms a pretrained DCGAN (Radford et al., 2015) at face inpainting and exhibits qualitatively reasonable results on out-of-distribution human faces. We provide additional experiments in the appendix, including for training on other datasets." }, { "heading": "2 METHOD AND MOTIVATION", "text": "We assume that we have access to a pretrained generative invertible neural network G : R^n → R^n. We write x = G(z) and z = G^{-1}(x), where x ∈ R^n is an image that corresponds to the latent representation z ∈ R^n. We will consider a G that has the Glow architecture introduced in Kingma and Dhariwal (2018). It can be trained by direct optimization of the likelihood of a collection of training images of a natural signal class, under a standard Gaussian distribution over the latent space. We consider recovering an image x from possibly-noisy linear measurements given by A ∈ R^{m×n},

y = Ax + η,

where η ∈ R^m models noise. Given a pretrained invertible generator G, we have access to likelihood estimates for all images x ∈ R^n. Hence, it is natural to attempt to solve the above inverse problem by the maximum likelihood formulation

min_{x∈R^n} ‖Ax − y‖² − γ log p_G(x),  (1)

where p_G is the likelihood function over x induced by G, and γ is a hyperparameter. We have found this formulation to be empirically challenging to optimize; hence we study the following proxy:

min_{z∈R^n} ‖AG(z) − y‖² + γ‖z‖.  (2)

Unless otherwise stated, we initialize (2) at z0 = 0.

The motivation for formulation (2) is as follows. As a proxy for the likelihood of an image x ∈ R^n, we use the likelihood of its latent representation z = G^{-1}(x). Because the invertible network G was trained to map a standard normal in R^n to a distribution over images, the negative log-likelihood of a point z is, up to an additive constant, proportional to ‖z‖². Instead of penalizing ‖z‖², we alternatively penalize the unsquared ‖z‖. In Appendix B, we show comparable performance for both the squared and unsquared formulations.

In principle, our formulation has an inherent flaw: some high-likelihood latent representations z correspond to low-likelihood images x. Mathematically, this comes from the Jacobian term that relates the likelihood in z to the likelihood in x upon application of the map G. For multimodal distributions, such images must exist, which we illustrate in the discussion. This proxy formulation relies on the fact that the set of such images has low probability and that they are inconsistent with enough provided measurements. Surprisingly, despite this potential weakness, we observe image reconstructions that are superior to BM3D and GAN-based methods at denoising, and superior to GAN-based and Lasso-based methods at compressive sensing.

In the case of compressive sensing and inpainting, we take γ = 0 in formulation (2). The motivation for such a formulation initialized at z0 = 0 is as follows. There is a manifold of images that are consistent with the provided measurements. We want to find the image x of highest likelihood on this manifold. Our proxy turns the likelihood maximization task over an affine space in x into the geometric task of finding the point on a manifold in z-space that is closest to the origin with respect to the Euclidean norm. In order to approximate that point, we run a gradient descent in z down the data misfit term starting at z0 = 0.

In the case of GAN priors for G : R^k → R^n, we use the formulation from Bora et al. (2017), which is the formulation above in the case where the optimization is performed over R^k, γ = 0, and the initialization is selected randomly.

All the experiments that follow are for an invertible model we trained on the CelebA dataset of celebrity faces, as in Kingma and Dhariwal (2018). Similar results for models trained on birds and flowers (Wah et al., 2011; Nilsback and Zisserman, 2008) can be found in the appendix. Due to computational considerations, we run experiments on 64 × 64 color images with the pixel values scaled to [0, 1]. The train and test sets contain a total of 27,000 and 3,000 images, respectively. We trained a Glow architecture (Kingma and Dhariwal, 2018); see Appendix A for details. Once trained, the Glow prior is fixed for use in each of the inverse problems below. We also trained a DCGAN for the same dataset. We solve (2) using LBFGS, which was found to outperform Adam (Kingma and Ba, 2014). DCGAN results are reported as the average of 3 runs because we observed some variance due to random initialization.
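A minimal PyTorch sketch of solving formulation (2) is given below. It is illustrative only: `glow` stands for any pretrained invertible generator mapping a latent vector z to a flattened image, and the step count and learning rate are placeholders rather than the paper's exact settings.

```python
import torch

def solve_inverse_problem(glow, A, y, gamma=0.0, steps=30, lr=0.1):
    """Minimize ||A G(z) - y||^2 + gamma * ||z|| over z, starting at z0 = 0."""
    n = A.shape[1]
    z = torch.zeros(n, requires_grad=True)      # z0 = 0 initialization
    opt = torch.optim.LBFGS([z], lr=lr, max_iter=steps)

    def closure():
        opt.zero_grad()
        x = glow(z)                             # image G(z), flattened to R^n
        loss = ((A @ x - y) ** 2).sum()
        if gamma:                               # skip the norm term when gamma = 0
            loss = loss + gamma * z.norm()
        loss.backward()
        return loss

    opt.step(closure)
    with torch.no_grad():
        return glow(z)                          # recovered image G(z_hat)
```

Denoising is the special case A = I; compressed sensing and inpainting reuse the same loop with their respective measurement operators and gamma = 0.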
" }, { "heading": "3 APPLICATIONS", "text": "" }, { "heading": "3.1 DENOISING", "text": "We consider the denoising problem with A = I and η ∼ N(0, σ²I), for images x in the CelebA test dataset. We evaluate the performance of a Glow prior, a DCGAN prior, and BM3D at two different noise levels. Figure 2 shows the recovered PSNR values as a function of γ for denoising by the Glow and DCGAN priors, along with the PSNR attained by BM3D. The figure shows that the performance of the regularized Glow prior increases with γ and then decreases. If γ is too low, the network fits to the noise in the image. If γ is too high, the data fit is not enforced strongly enough. The left panel reveals that an appropriately regularized Glow prior can outperform BM3D by almost 2 dB. The experiments also reveal that appropriately regularized Glow priors outperform the DCGAN prior, which suffers from representation error and is not aided by the regularization. The right panel confirms that with smaller noise levels, less regularization is needed for optimal performance. A visual comparison of the recoveries at the noise level σ = 0.1 using the Glow prior, the DCGAN prior, and BM3D can be seen in Figure 3. Note that the recoveries with Glow are sharper than those of BM3D. See Appendix B for more quantitative and qualitative results." }, { "heading": "3.2 COMPRESSED SENSING", "text": "In compressed sensing, one is given undersampled linear measurements of an image, and the goal is to recover the image from those measurements. In our notation, A ∈ R^{m×n} with m < n. As the image x is undersampled, there is an affine space of images consistent with the measurements, and an algorithm must select which is most 'natural.' A common proxy for naturalness in the literature has been sparsity with respect to the DCT or wavelet bases. With a GAN prior, an image is considered natural if it lies in or near the range of the GAN. For an invertible prior under our proxy for likelihood, we consider an image to be natural if it has a latent representation of small norm.

We study compressed sensing in the case that A is an m × n matrix of i.i.d. N(0, 1/m) entries, and x is an image from the CelebA test set. Here, n = 64 × 64 × 3 = 12288. We consider the case where η is standard i.i.d. Gaussian random noise normalized such that √(E‖η‖²) = 0.1. We compare Glow, DCGAN, and Lasso¹ with respect to the DCT and wavelet bases.

Our main result is that the Glow prior with γ = 0 and initialization z0 = 0 outperforms both DCGAN and Lasso in reconstruction quality over all undersampling ratios, as shown in the left panel of Figure 4. Surprisingly, in the case of extreme undersampling, Glow substantially outperforms these methods even though it does not maintain a direct low-dimensional parameterization of the signal manifold. The Glow prior (1) can result in 15 dB higher PSNRs than DCGAN, and (2) can give comparable recovery errors with 2-3x fewer measurements at high undersampling ratios. This difference is explained by the representation error of DCGAN, which has been shown to be the dominant source of error in DCGAN by Bora et al. (2017). Additional plots and visual comparisons, available in Appendix C, show notable improvements in quality of in- and out-of-distribution images using an invertible prior relative to DCGAN and Lasso.
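A small NumPy sketch of the measurement model used in these experiments is below (illustrative; the generator-based recovery itself would reuse the solver sketched in Section 2, and the noise scaling matches √(E‖η‖²) = 0.1).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64 * 64 * 3          # image dimension, n = 12288
m = n // 64              # number of compressive measurements (placeholder ratio)

x = rng.random(n)                              # placeholder image in [0, 1]
A = rng.standard_normal((m, n)) / np.sqrt(m)   # iid N(0, 1/m) entries

# Noise with sqrt(E ||eta||^2) = 0.1, i.e. each eta_i ~ N(0, 0.01/m).
eta = rng.standard_normal(m) * (0.1 / np.sqrt(m))
y = A @ x + eta                                # undersampled noisy measurements
```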
We conducted several additional experiments to understand the regularizing effects of γ and the initialization z0. The right panel of Figure 4 shows the PSNRs under multiple initialization strategies: z0 = 0, z0 ∼ N(0, 0.1²I), z0 ∼ N(0, 0.7²I), z0 = G^{-1}(x0) with x0 given by the solution to Lasso with respect to the wavelet basis, and z0 = G^{-1}(x0) where x0 is x perturbed by a random point in the null space of A. The best performance was observed with initialization z0 = 0. The hyperparameter γ can be taken to be zero, which is surprising because then there is no direct penalization of likelihood for this noisy compressive sensing problem. In the case of γ = 0, we observe that larger initializations result in recovered images of lower PSNR. See Appendix C for additional experiments that show this effect. We observe that the initialization strategy can have a strong qualitative effect on the recovery formulation. For example, if the optimization is initialized with the solution to the Lasso, then directly penalizing the likelihood of z can improve the reconstruction PSNR, though those reconstructions are still worse than with initialization z0 = 0 and γ = 0. Suboptimal initialization procedures apparently benefit from direct penalization of likelihood, whereas the z0 = 0 initialization apparently does not.

Finally, we observe that the Glow prior is much more robust to out-of-distribution examples than the GAN prior. Figure 5 shows recovered images using (2) for compressive sensing on images not belonging to the CelebA dataset. DCGAN's performance reveals biases of the underlying dataset and limitations of low-dimensional modeling. For example, projecting onto the CelebA-trained DCGAN can cause incorrect skin tone, gender, and age. Its performance on out-of-distribution images is poor.

¹The inverse problems with Lasso were solved by min_z ‖AΦz − y‖²₂ + 0.01‖z‖₁ using coordinate descent.

In contrast, the Glow prior mitigates this bias, even demonstrating image recovery for natural images that are not representative of the CelebA training set, including people who are older, have darker skin tones, wear glasses, have a beard, or have unusual makeup. The Glow prior's performance also extends to significantly out-of-distribution images, such as animated characters and natural images unrelated to faces. See Appendix C.2 for additional experiments." }, { "heading": "3.3 INPAINTING", "text": "In inpainting, one is given a masked image of the form y = M ⊙ x, where M is a masking matrix with binary entries and x ∈ R^n is an n-pixel image. The goal is to find x. We could rewrite (2) with γ = 0 as

min_{z∈R^n} ‖y − M ⊙ G(z)‖².

There is an affine space of images consistent with the measurements, and an algorithm must select which is most natural. As before, using the minimizer ẑ, the estimated image is given by G(ẑ). Our experiments reveal the same story as for compressed sensing. If initialized at z0 = 0, then the empirical risk formulation with γ = 0 exhibits high PSNRs on test images. Algorithmic regularization is again occurring due to initialization. In contrast, DCGAN is limited by its representation error. See Figure 6, and Appendix D for more results, including visually reasonable face inpainting, even for out-of-distribution human faces.
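Inpainting fits the same solver with a masked misfit; a minimal PyTorch sketch is below (again with `glow` as a placeholder pretrained generator and placeholder optimizer settings).

```python
import torch

def inpaint(glow, mask, y, steps=20, lr=1.0):
    """Minimize ||y - mask * G(z)||^2 over z with z0 = 0 (gamma = 0)."""
    z = torch.zeros(y.numel(), requires_grad=True)
    opt = torch.optim.LBFGS([z], lr=lr, max_iter=steps)

    def closure():
        opt.zero_grad()
        # Binary mask zeroes out unobserved pixels, so the misfit is
        # computed on observed pixels only.
        loss = ((y - mask * glow(z)) ** 2).sum()
        loss.backward()
        return loss

    opt.step(closure)
    with torch.no_grad():
        return glow(z)   # G(z_hat) fills in the masked region
```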
" }, { "heading": "4 DISCUSSION", "text": "We have demonstrated that pretrained generative invertible models can be used as natural signal priors in imaging inverse problems. Their strength is that every desired image is in the range of an invertible model, and the challenge that they overcome is that every undesired image is also in the range of the model and no explicit low-dimensional representation is kept. We study a regularization for empirical loss minimization that promotes recovery of images that have a high value of a proxy for image likelihood under the generative model. We demonstrate that this formulation can quantitatively and qualitatively outperform BM3D at denoising. Additionally, it has lower recovery errors than Lasso across all levels of undersampling, and it can get comparable errors from 10-20x fewer measurements, which is a 2x reduction from Bora et al. (2017). The superior recovery performance of the invertible prior at very extreme undersampling ratios is particularly surprising given that invertible nets do not maintain explicit low-dimensional representations, as GANs do. Additionally, our trained invertible model yields significantly better reconstructions than Lasso even on out-of-distribution images, including images with rare features of variation, and on unrelated natural images.

The idea of analyzing inverse problems with invertible neural networks has appeared in Ardizzone et al. (2018). The authors study estimation of the complete posterior parameter distribution under a forward process, conditioned on observed measurements. Specifically, the authors approximate a particular forward process by training an invertible neural network. The inverse map is then directly available. In order to cope with information loss, the authors augment the measurements with additional variables. This work differs from ours because it involves training a separate net for every particular inverse problem. In contrast, our work studies how to use a pretrained invertible generator for a variety of inverse problems not known at training time. Training invertible networks is challenging and computationally expensive; hence, it is desirable to separate the training of off-the-shelf invertible models from potential applications in a variety of scientific domains.

Why optimize a proxy for image likelihood instead of optimizing image likelihood directly?

As noted in Section 2, the immediate formulation one would write down for inverse problems under an invertible prior is to optimize a data misfit term together with an image log-likelihood term. Unfortunately, we found it difficult to get this optimization to converge in practice. The likelihood term can exhibit rapid variation due to the Jacobian of the transformation z ↦ x = G(z); additionally, the likelihood term may in principle even contain local minima or other geometric properties that make gradient descent difficult. Figure 7 compares the loss landscapes in x and z, illustrating that the learned likelihood function in x may lead to difficulty in choosing appropriate step sizes for gradient descent algorithms.

In contrast, nice geometric properties appear in latent space for an invertible model. As an illustration, consider the compressive sensing problem with noiseless measurements. Here, the formulation corresponds to a gradient descent down the data misfit term ‖AG(z) − y‖² starting at z0 = 0. This data misfit term has a favorable geometry for optimization in that all local minima are global minima.
This is because the level sets in z of ‖AG(z) − y‖² are given by G^{-1} applied to the level sets in x of ‖Ax − y‖², which have a simple structure because of the linearity of the measurements in x. There may be additional benefits to optimizing in z because the invertible net learns representations that permit interpolation between images and semantically meaningful arithmetic, as reported in Kingma and Dhariwal (2018).

Why is the likelihood of an image's latent representation a reasonable proxy for the image's likelihood?

The training process for an invertible generative model attempts to learn a target distribution in image space by directly maximizing the likelihood of provided samples from that distribution, given a standard Gaussian prior in latent space. High-probability regions in latent space map to regions in image space of equal probability. Hence, broadly speaking, regions of small values of ‖z‖ are expected to map to regions of large likelihoods in image space. There will be exceptions to this property. For example, natural image distributions have a multimodal character. The preimage of high-probability modes in image space will correspond to high-likelihood regions in latent space. Because the generator G is invertible and continuous, interpolation in latent space between these modes will provide images of high likelihood in z but low likelihood in the target distribution. To illustrate this point, we trained a Real-NVP (Dinh et al., 2016) invertible neural network on the two-dimensional set of points depicted in Figure 8 (left panel). The middle and right panels show that high-likelihood regions in latent space generally correspond to higher-likelihood regions in image space, but that there are some regions of high likelihood in latent space that map to points of low likelihood in image space and in the target distribution. We see that the spurious regions are of low total probability and would be unlikely to be the desired outcomes of an inverse problem arising from the target distribution.

How can solving compressive inverse problems be successful without direct penalization of the proxy image likelihood?

If there are fewer linear measurements than the dimensionality of the desired signal, an affine space of images is consistent with the measurements. In our formulation, regularization does not occur by direct penalization of our proxy for image likelihood; instead, it occurs implicitly by performing the optimization in z-space with an initialization of z0 = 0. The set of latent representations z that are consistent with the compressive measurements defines an (n − m)-dimensional nonlinear manifold. As per the likelihood proxy mentioned above, the spirit of our formulation is to find the point on this manifold that is closest to the origin with respect to the Euclidean norm. Our specific way of estimating this point is to perform a gradient descent down a data misfit term in z-space, starting at the origin. While a gradient flow typically will not find the closest point on the manifold, it empirically finds a reasonable approximation of that point. In practice, one could further do a local search to refine the output of this gradient flow, but we elect not to do so for the sake of simplicity.
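For reference, the exact relationship between the two likelihoods, which makes the Jacobian term in this discussion explicit, is the standard change-of-variables formula (stated here as a sketch, for an invertible G with a standard Gaussian latent prior):

\[
\log p_G(x) = \log p(z) + \log \left| \det \frac{\partial G^{-1}}{\partial x}(x) \right|,
\qquad z = G^{-1}(x), \qquad
\log p(z) = -\tfrac{1}{2}\|z\|^2 - \tfrac{n}{2}\log(2\pi).
\]

The proxy in formulation (2) penalizes only the ‖z‖ term and drops the log-determinant, which is exactly why some high-likelihood latent representations can correspond to low-likelihood images.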
Why does the invertible prior do so well, especially on out-of-distribution images?

One reason that the invertible prior performs so well is that it has no representation error. The lack of representation error of invertible nets presents a significant opportunity for imaging with a learned prior. Any image is potentially recoverable, even if the image is significantly outside of the training distribution. In contrast, methods based on projecting onto an explicit low-dimensional representation of a natural signal manifold will have representation error, perhaps due to modeling assumptions, mode collapse, or bias in a training set. Such methods will see performance prematurely saturate as the number of measurements increases. In contrast, an invertible prior would not see performance saturate. In the extreme case of having a full set of exact measurements, an invertible prior could in principle recover any image exactly.

It is natural to wonder which images can be effectively recovered using an invertible prior trained on a particular signal class. As expected, we see the best reconstruction errors on in-distribution images, and performance degrades as images get further out-of-distribution. Nonetheless, we observe that reconstructions of unrelated natural images are still of higher quality than with the Lasso. It appears that the invertible generator learns some general attributes of natural images. This leads to several questions: when a generative invertible net is trained, how far out-of-distribution can an image be while maintaining a high likelihood? How do invertible nets learn useful statistics of natural images? Is that due primarily to training, or is there architectural bias toward natural images, as with the Deep Image Prior and Deep Decoder (Ulyanov et al., 2018; Heckel and Hand, 2018)?

The results of this paper provide further evidence that reducing the representation error of generators can significantly enhance the performance of generative models for inverse problems in imaging. This idea was also recently explored in Athar et al. (2018), where the authors trained a GAN-like prior with a high-dimensional latent space. The high dimensionality of this space lowers representation error, though it is not zero. In their work, the high-dimensional latent space had a structure that was difficult to optimize directly, so the authors successfully modeled latent representations as the output of an untrained convolutional neural network whose parameters are estimated at test time. Their paper and ours raise several questions: Which generator architectures provide a good balance between low representation error, ease of training, and ease of inversion? Should a generative model be capable of producing all images in order to perform well on out-of-distribution images of interest? Are there cheaper architectures that perform comparably? These questions are quite important, as solving equation 2 in our 64×64-pixel color image experiments took 15 GPU-minutes. New developments are needed on architectures and frameworks in between low-dimensional generative priors and fully invertible generative priors. Such methods could leverage the strengths of invertible models while being much cheaper to train and use." }, { "heading": "A EXPERIMENTAL SETUP", "text": "Simulations were conducted mainly on the CelebA-HQ dataset, used in Kingma and Dhariwal (2018); it has 30,000 color images that were resized to 64 × 64 for computational reasons and split into 27,000 training and 3,000 test images. We also provide some additional experiments on the Flowers (Nilsback and Zisserman, 2008) and Birds (Wah et al., 2011) datasets.
The Flowers dataset contains 8,189 color images resized to 64×64, of which 500 images are set aside for testing. The Birds dataset contains a total of 11,788 images, which were center-aligned and resized to 64×64, of which 5,794 images are set aside for testing.

We specifically model our invertible networks after the recently proposed Glow (Kingma and Dhariwal, 2018) architecture, which consists of multiple flow steps. Each flow step comprises an activation normalization layer, a 1×1 convolutional layer, and an affine coupling layer, each of which is invertible. Let K be the number of steps of flow before a splitting layer, and L be the number of times the splitting is performed. To train over CelebA, we choose the network to have K = 48, L = 4, and affine coupling, and train it with learning rate 0.0001 and batch size 6 at resolution 64×64×3. The model was trained over 5-bit images with 10,000 warmup iterations as in Kingma and Dhariwal (2018), but when solving inverse problems using Glow, the original 8-bit images were used. We refer the reader to Kingma and Dhariwal (2018) for specific details on the operations performed in each of the network layers.

We use LBFGS to solve the inverse problem. For best performance, we set the number of iterations and the learning rate for denoising, compressed sensing, and inpainting to 20, 1; 30, 0.1; and 20, 1; respectively. We use PyTorch to implement Glow network training and to solve the inverse problem. Glow training was conducted on a single Titan Xp GPU using the maximum allowable (under the given computational constraints) batch size of 6. In the case of compressed sensing, recovering a single image on a Titan Xp using the LBFGS solver with 30 steps takes 889.125 seconds (14.82 minutes). However, we can solve 6 inverse problems in parallel on the given hardware platform.

Unless specified otherwise, the inverse problem under the Glow prior is always initialized with z0 = 0, whereas under the DCGAN prior, we initialize with z0 ∼ N(0, 0.1²I) and report the average over three random restarts. In all the quantitative experiments, the reported quality metrics such as PSNR and reconstruction error are averaged over 12 randomly drawn test set images." }, { "heading": "B DENOISING: ADDITIONAL EXPERIMENTS", "text": "We present additional quantitative experiments on image denoising here. The complete set of experiments on average PSNR over 12 CelebA (within-distribution²) test set images versus the penalization parameter γ, under noise levels σ = 0.01, 0.05, 0.1, and 0.2, is presented in Figure 10 below. The central message is that the Glow prior outperforms the DCGAN prior uniformly across all γ, due to the representation limit of DCGAN. In addition, striking the right balance between the misfit term and the penalization term by appropriately choosing γ improves the performance of Glow: it approaches the state-of-the-art BM3D algorithm at low noise levels, and the gains are clearly visible at higher noise; for example, at a noise level of σ = 0.2, the Glow prior improves upon BM3D by 2 dB. Visually, the results of the Glow prior are clearly superior even to the BM3D recoveries, which are generally blurry and over-smoothed, as can be seen in the qualitative results below. To avoid fitting the noisy image with the Glow model, we force the recoveries to be natural by choosing a large enough γ.

²The redundant 'within distribution' phrase is added to emphasize that the test set images are drawn from the same distribution as the train set.
We do this to avoid confusion with the out-of-distribution recoveries also presented in this paper.

Recall that we are solving the regularized empirical risk minimization program

argmin_{z∈Domain(G)} ‖y − AG(z)‖² + γ‖z‖.

In general, one can instead solve argmin_{z∈Domain(G)} ‖y − AG(z)‖² + H(‖z‖), where H(·) is a monotonically increasing function. Figure 11 shows a comparison of the most common choices, linear (already used in the rest of the paper) and quadratic H, in the context of denoising. We find that the highest achievable PSNR remains the same in both cases; however, the penalization parameter γ has to be adjusted accordingly.

We train Glow and DCGAN on CelebA. Additional qualitative image denoising results under the higher noise levels σ = 0.1 and 0.2, comparing the Glow prior against the DCGAN prior and BM3D, are presented below in Figures 12 and 13.

We also trained a Glow model on the Flowers dataset. Below we present its qualitative denoising performance against BM3D on the test set Flowers images. We also show the effect of varying γ: smaller γ leads to overfitting, and vice versa." }, { "heading": "C COMPRESSED SENSING: ADDITIONAL EXPERIMENTS", "text": "Some additional quantitative image recovery results on the test set of the CelebA dataset are presented in Figure 15; it depicts the comparison of the Glow prior, the DCGAN prior, LASSO-DCT, and LASSO-WVT at compressed sensing. We plot the reconstruction error := (1/n)‖x − x̂‖²₂, where x̂ is the recovered image and n = 12288 is the number of pixels in the 64×64×3 CelebA images. Glow uniformly outperforms DCGAN and LASSO across the entire range of the number of measurements. LASSO-DCT and LASSO-WVT eventually catch up to Glow, but only when the observed measurements are a significant fraction of the total number of pixels. On the other hand, DCGAN is initially better than LASSO but prematurely saturates due to its limited representation capacity.

Surprisingly, we observe that no explicit penalization of the likelihood is necessary for compressive sensing with an invertible generative prior under formulation equation 2. That is, we can take γ = 0 when the optimization is initialized at z0 = 0. This indicates that algorithmic regularization is occurring and that initialization plays a role. We performed some additional experiments to study the role of initialization. The left panel in Figure 19 shows that as the norm of the latent initialization increases, the norm of the recovered latent representation increases and the PSNR of the recovered image decreases. Moreover, the right panel in Figure 19 shows the norm of the estimated latent representation at each iteration of the optimization. In all our experiments, it grows monotonically with the iteration number. These experiments provide further evidence that smaller latent initializations lead to outputs that are more natural and have smaller latent representations.

Recall that natural face images correspond to smaller-norm z0. In Figure 20, we plot the norm of the latent codes of the iterates of each algorithm vs. the number of iterations. The central message is that initializing with a smaller-norm z0 tends to yield natural (smaller latent representation) recoveries. This is one explanation as to why, in compressed sensing, one is able to obtain the true solution out of the affine space of solutions without penalizing the unnaturalness of the recoveries.

We now present visual recovery results on test images from the CelebA dataset under a varying number of measurements in compressed sensing.
We compare recoveries under the Glow prior, the DCGAN prior, LASSO-DCT, and LASSO-WVT.

C.1 COMPRESSED SENSING ON THE FLOWERS AND BIRDS DATASETS

We also performed compressed sensing experiments, similar to those on the CelebA dataset above, on the Birds and Flowers datasets. We trained a Glow invertible network for each dataset, and present below the quantitative and qualitative recoveries for each dataset.

C.2 COMPRESSED SENSING ON OUT-OF-DISTRIBUTION IMAGES

Lack of representation error in invertible nets leads us to an important and interesting question: does the trained network fit related natural images that are underrepresented, or even unrepresented, in the training dataset? Specifically, can a Glow network trained on CelebA faces be a good prior on other faces; for example, those with dark skin tone, faces with glasses or facial hair, or even animated faces? In general, our experiments show that the Glow prior has excellent performance on such out-of-distribution images that are semantically similar to celebrity faces but not representative of the CelebA dataset. In particular, we have been able to recover faces of darker skin tone, older people with beards, eastern women, men with hats, and animated characters such as Shrek, from compressed measurements under the Glow prior. Recoveries under the Glow prior convincingly beat the DCGAN prior, which shows a definite bias due to training. Not only that, the Glow prior also outperforms unbiased methods such as LASSO-DCT and LASSO-WVT.

Can we expect the Glow prior to continue to be an effective proxy for arbitrarily out-of-distribution images? To answer this question, we tested arbitrary natural images, such as a car, a house door, and butterfly wings, that are semantically unrelated to CelebA images. In general, we found that Glow is an effective prior at compressed sensing of out-of-distribution natural images that are assigned a high likelihood score (small-norm latent representations). On these images, Glow also outperforms LASSO.

Recoveries of natural images that are assigned very low likelihood scores by the Glow model generally run into instability issues. During training, invertible nets learn to assign high likelihood scores to the training images. All the network parameters, such as the scaling in the coupling layers of the Glow network, are learned to behave stably with such high-likelihood representations. However, on very low-likelihood representations, unseen during the training process, the network becomes unstable and its outputs begin to diverge to very large values; this may be due to several reasons, such as normalization (scaling) layers not being tuned to the unseen representations. An LBFGS search for the solution of an inverse problem to recover a low-likelihood image leads the iterates into neighborhoods of low-likelihood representations that may drive the network to instability.

We find that the Glow network has the tendency to assign higher likelihood scores to arbitrarily out-of-distribution natural images. This means that invertible networks have at least partially learned something more general about natural images from the CelebA dataset: maybe some high-level features that face images share with other natural images, such as smooth regions followed by discontinuities.
This partially learned generality allows the Glow prior to extend its effectiveness to other natural images beyond just the training set.
Figures 42, 43, 44, 45, and 46 compare the performance of LASSO-DCT, LASSO-WVT, the DCGAN prior, and the Glow prior on the compressed sensing of out-of-distribution images under a varying number of measurements.
D IMAGE INPAINTING
Our experiments with inpainting reveal a similar story as with compressed sensing. Compared to DCGAN, the recovered PSNRs using the Glow prior are much higher under an appropriate γ, as depicted in the right panel of Figure 47. If improperly initialized, performance with γ = 0 can be poor; however, even under an improper initialization, a sufficiently large γ leads to higher PSNRs.
As with compressive sensing, if the initialization is from a small latent variable, then the empirical risk formulation with γ = 0 exhibits high PSNRs. Algorithmic regularization is again occurring due to the small latent variable initialization.
We present here qualitative results on image inpainting under the DCGAN prior and the Glow prior on the CelebA test set. Compared to DCGAN, the reconstructions from Glow are of noticeably higher visual quality.
D.1 IMAGE INPAINTING ON OUT OF DISTRIBUTION IMAGES
We now perform image inpainting under the Glow prior and the DCGAN prior, each trained on CelebA. Figure 49 shows the visuals of out-of-distribution inpainting. As before, DCGAN continues to suffer due to representation limits and data bias, while Glow achieves reasonable reconstructions on out-of-distribution images semantically similar to CelebA faces. As one deviates to other natural images, such as houses, doors, and butterfly wings, the inpainting performance deteriorates. In compressed sensing, Glow performed much better on such arbitrarily out-of-distribution images, as good recoveries there only require the network to assign a higher likelihood score to the true image than to all the candidate images given by the null space of the measurement operator." }, { "heading": "E DISCUSSION", "text": "Figure 50 confirms the intuition brought up in the Discussion section of the main paper that the trained Glow network assigns lower likelihoods (larger latent representations) to noisy images. Histograms show that noisy images generally occupy the lower-likelihood regimes or, equivalently, the larger-norm latent representations.
Our experiments verify that natural images have smaller latent representations than unnatural images. Here we also show that adding noise to natural images increases the norm of their latent representations, and that higher noise levels result in larger increases. Additionally, we provide evidence that random perturbations in image space induce larger changes in z than comparable natural perturbations in image space. Figure 51 shows a plot of the norm of the change in image space, averaged over 100 test images, as a function of the size of a perturbation in latent space. Natural directions are given by the interpolation between the latent representations of two test images. For the denoising problem, this difference in sensitivity indicates that the optimization algorithm might obtain a larger decrease in ‖z‖ by an image modification that reduces unnatural image components than by a correspondingly large modification in a natural direction." }, { "heading": "F LOSS LANDSCAPE: DCGAN VS. GLOW", "text": "In Figure 52, we plot $\|y - AG(z^* + \alpha\delta_v + \beta\delta_w)\|^2$ versus (α, β), where δv and δw are scaled to have the same norm as z*, the latent representation of a fixed test image.
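A sketch of this grid evaluation, with both direction vectors rescaled to ‖z*‖ as described, might look as follows; G and A are again placeholders for the generator and the measurement operator.

```python
import torch

def loss_landscape(y, A, G, z_star, dv, dw, alphas, betas):
    # Evaluate ||y - A G(z* + a*dv + b*dw)||^2 over an (alpha, beta) grid,
    # after rescaling both directions to the norm of z*.
    dv = dv * z_star.norm() / dv.norm()
    dw = dw * z_star.norm() / dw.norm()
    grid = torch.zeros(len(alphas), len(betas))
    with torch.no_grad():
        for i, a in enumerate(alphas):
            for j, b in enumerate(betas):
                z = z_star + a * dv + b * dw
                grid[i, j] = ((A @ G(z) - y) ** 2).sum()
    return grid
```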
For DCGAN, we plot the loss landscape versus two pairs of random directions. For Glow, we plot the loss landscape versus a pair of random directions and a pair of directions that linearly interpolate in latent space between z* and another test image.
G IMAGE AND LATENT SPACE FORMULATIONS
As mentioned in the main paper, a natural formulation of the inverse problem is
$\min_{x \in \mathbb{R}^n} \|Ax - y\|^2 - \gamma \log p_G(x),$ (3)
where $p_G(x)$ is the target density. We instead formulate the inverse problem as
$\min_{z \in \mathbb{R}^n} \|AG(z) - y\|_2^2 + \gamma \|z\|^2;$ (4)
a measurement misfit combined with a Gaussian prior on the latent space.
We will denote the target distribution by $p_G(x)$ and the latent Gaussian distribution by $p(z)$. To illustrate the differences between equation 3 and equation 4, we train a Real-NVP model (Dinh et al., 2016) on a synthetic two-dimensional dataset, visualize both log p_G(x) and log p(z) in latent and image space, and solve a simple compressive sensing recovery problem. Our two-dimensional data points are generated by sampling the first coordinate x1 from a bimodal Gaussian distribution and the second coordinate x2 from a uniform distribution, as shown in Figure 53.
For comparison, we plot the x-likelihood versus x (left), the latent z-likelihood versus x (middle), and the x-likelihood versus z (right) in Figure 54. These plots illustrate that, generally, high-likelihood x points are also given a higher latent z-likelihood; however, some low x-likelihood points might be assigned a higher Gaussian z-likelihood. These are, for example, the points living on the darker contour spearing through the Gaussian bowl in the right plot. Figure 55 shows some of the points in the x-likelihood plot (left) that map to this contour in z-space (right).
G.1 COMPRESSIVE SENSING IN 2D
To compare the latent-space formulation equation 4 and the data-space formulation equation 3, we construct a simple compressive sensing recovery problem for this two-dimensional data and illustrate the difference under both good and bad initializations. Specifically, we want to recover a vector $x = [x_1\ x_2]^T$ from a single linear measurement $y = \langle a, x \rangle = x_2$, where $a = [0\ 1]^T$. Figure 56 shows the gradient descent path, and the final solution, while solving equation 4 (left column) and equation 3 (right column) from a good and a bad initialization. The x-likelihood formulation seems more robust to a bad initialization in this case than the z-likelihood formulation, as the z-likelihood might not be a good proxy for the x-likelihood at some points. This bad case is carefully crafted to illustrate the difference between the two formulations; in practice, it seems unlikely that a low x-likelihood point that somehow achieves a higher z-likelihood will also obey the measurement constraints.
G.2 COMPRESSIVE SENSING FOR CELEBA
In the case of CelebA images, we found that optimizing the likelihood of images directly proved very hard to tune. To better understand why equation 4 is easier than equation 3, we draw the loss surfaces of equation 4 versus z and of equation 3 versus x under different γ, along two random directions around the ground truth in z or x, as appropriate; see Figure 57. In the x-formulation, the loss surfaces (first row) have a sharp dip at the ground truths, which comes from the −log p_G(x) term. We believe that this sharp dip in the loss surface makes it difficult to tune the γ parameter and the learning rate, and makes the optimization using equation 3 numerically more challenging, as observed in our experiments. The two objectives are written out side by side in the sketch below.
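For reference, the two objectives compared in this section can be transcribed side by side as below. Following the Gaussian-prior reading of equation 4, we write its penalty as a squared norm; log_pG is a placeholder for the model's log-density (e.g. obtained via the change-of-variables formula of an invertible net).

```python
import torch

def z_space_loss(z, y, A, G, gamma):
    # Equation 4: measurement misfit plus a Gaussian prior on the latent code.
    return ((A @ G(z) - y) ** 2).sum() + gamma * (z ** 2).sum()

def x_space_loss(x, y, A, log_pG, gamma):
    # Equation 3: measurement misfit minus a weighted image log-likelihood.
    return ((A @ x - y) ** 2).sum() - gamma * log_pG(x)
```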
By contrast, the loss surfaces for equation 4 (second row) appear smoother.
We now show a quantitative comparison of the x-likelihood formulation in equation 3 and the z-likelihood formulation in equation 4 on compressive sensing for CelebA test images versus m for fixed values of γ; see Figure 58. We initialize with z0 = 0 and x0 = G(z0), as appropriate. We simply choose γ = 0 in equation 4. However, we need to choose γ more carefully in equation 3, and different values of γ are appropriate across different undersampling ratios. Even if one ignores the difficulty of choosing the hyperparameter γ, the formulation in equation 4 generally performs much better than equation 3, as evident from the plots.
To show the effect of noise on recovery in compressive sensing under different values of γ and noise levels, we plot the PSNR of the iterates when solving equation 4 against iterations in Figure 59. This plot shows, perhaps surprisingly, that even under noisy compressed measurements it is a good idea to solve the inverse compressed sensing problem equation 4 with γ = 0.
G.3 DENOISING FOR CELEBA
For completeness, we also compare denoising using our latent-space formulation equation 4 and our image-space formulation equation 3 under the different noise levels σ = 0.05 and σ = 0.10; see Figures 60 and 61, respectively. For both noise levels, we observe equal performance (indicated by the highest PSNR) when optimizing in the latent or image space. We do not report results for σ = 0.20, as it was hard to tune hyperparameters for higher noise levels in equation 3." }
]
2019
null
SP:0d872fb4321f3a4a3fc61cf4d33b0c7e33f2d695
[ "This paper presents deep symbolic regression (DSR), which uses a recurrent neural network to learn a distribution over mathematical expressions and uses policy gradient to train the RNN for generating desired expressions given a set of points. The RNN model is used to sample expressions from the learned distribution, which are then instantiated into corresponding trees and evaluated on a dataset. The fitness on the dataset is used as the reward to train the RNN using policy gradient. In comparison to GP, the presented DSR approach recovers exact symbolic expressions in majority of the benchmarks.", "This paper presents a RNN-RL based method for the symbolic regression problem. The problem is new (to Deep RL) and interesting. My main concern is about the proposed method, where the three RL related equations (not numbered) at page 5 are also direct copy-from-textbook policy gradient equations without specific adaptation to the new application considered in this paper, which is very strange. The two conditional probability definitions considered at page 3 are not mentioned in later text. These are only fractions of the underlying method and by reading the paper back and forth several times, it is not clear of the basic algorithmic flowchart, let alone more detailed description of the related parameters. Without these information, it is impossible to have a fair judge of the novelty and feasibility of the proposed method. The empirical results are also limited in small dataset, which makes it hard to verify the generality of the superior claim." ]
Discovering the underlying mathematical expressions describing a dataset is a core challenge for artificial intelligence. This is the problem of symbolic regression. Despite recent advances in training neural networks to solve complex tasks, deep learning approaches to symbolic regression are lacking. We propose a framework that combines deep learning with symbolic regression via a simple idea: use a large model to search the space of small models. More specifically, we use a recurrent neural network to emit a distribution over tractable mathematical expressions, and employ reinforcement learning to train the network to generate better-fitting expressions. Our algorithm significantly outperforms standard genetic programming-based symbolic regression in its ability to exactly recover symbolic expressions on a series of benchmark problems, both with and without added noise. More broadly, our contributions include a framework that can be applied to optimize hierarchical, variable-length objects under a black-box performance metric, with the ability to incorporate a priori constraints in situ.
[]
[ { "authors": [ "Thomas Bäck", "David B Fogel", "Zbigniew Michalewicz" ], "title": "Evolutionary Computation 1: Basic Algorithms and Operators", "venue": "CRC press,", "year": 2018 }, { "authors": [ "Irwan Bello", "Barret Zoph", "Vijay Vasudevan", "Quoc V Le" ], "title": "Neural optimizer search with reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Steven L Brunton", "Joshua L Proctor", "J Nathan Kutz" ], "title": "Discovering governing equations from data by sparse identification of nonlinear dynamical systems", "venue": "Proceedings of the National Academy of Sciences,", "year": 2016 }, { "authors": [ "BGW Craenen", "AE Eiben", "E Marchiori" ], "title": "How to handle constraints with evolutionary algorithms", "venue": "Practical Handbook Of Genetic Algorithms: Applications,", "year": 2001 }, { "authors": [ "Roger Fletcher" ], "title": "Practical Methods of Optimization", "venue": null, "year": 2013 }, { "authors": [ "Félix-Antoine Fortin", "François-Michel De Rainville", "Marc-André Gardner", "Marc Parizeau", "Christian Gagné" ], "title": "Deap: Evolutionary algorithms made easy", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "John R Koza" ], "title": "Genetic Programming: On the Programming of Computers by Means of Natural Selection, volume 1", "venue": "MIT press,", "year": 1992 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Xin Li", "Chi Zhou", "Weimin Xiao", "Peter C Nelson" ], "title": "Prefix gene expression programming", "venue": null, "year": 2005 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759,", "year": 2016 }, { "authors": [ "Aravind Rajeswaran", "Sarvjeet Ghotra", "Balaraman Ravindran", "Sergey Levine" ], "title": "Epopt: Learning robust neural network policies using model ensembles", "venue": "arXiv preprint arXiv:1610.01283,", "year": 2016 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V Le" ], "title": "Searching for activation functions", "venue": "arXiv preprint arXiv:1710.05941,", "year": 2017 }, { "authors": [ "Silviu-Marian Udrescu", "Max Tegmark. 
Ai" ], "title": "feynman: a physics-inspired method for symbolic regression", "venue": "arXiv preprint arXiv:1905.11481,", "year": 2019 }, { "authors": [ "Nguyen Quang Uy", "Nguyen Xuan Hoai", "Michael O’Neill", "Robert I McKay", "Edgar Galván-López" ], "title": "Semantically-based crossover in genetic programming: application to real-valued symbolic regression", "venue": "Genetic Programming and Evolvable Machines,", "year": 2011 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ekaterina J Vladislavleva", "Guido F Smits", "Dick Den Hertog" ], "title": "Order of nonlinearity as a complexity measure for models generated by symbolic regression via pareto genetic programming", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2008 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Xiang Ren", "William L Hamilton", "Jure Leskovec" ], "title": "Graphrnn: Generating realistic graphs with deep auto-regressive models", "venue": "arXiv preprint arXiv:1802.08773,", "year": 2018 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "International Conference on Learning Representations,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Understanding the mathematical relationships among variables in a physical system is an integral component of the scientific process. Symbolic regression aims to identify these relationships by searching over the space of tractable mathematical expressions to best fit a dataset. Specifically, given a dataset of (X, y) pairs, where X ∈ Rn and y ∈ R, symbolic regression aims to identify a function f(X) : Rn → R that minimizes a distance metric D(y, f(X)) between real and predicted values. That is, symbolic regression seeks to find the optimal f? = argminf D (y, f(X)), where the functional form of f is a tractable expression.\nThe resulting expression f? may be readily interpretable and/or provide useful scientific insights simply by inspection. In contrast, conventional regression imposes a single model structure that is fixed during training, often chosen to be expressive (e.g. a neural network) at the expense of being easily interpretable. However, the space of mathematical expressions is discrete (in model structure) and continuous (in model parameters), growing exponentially with the length of the expression, rendering symbolic regression an extremely challenging machine learning problem.\nGiven the large and combinatorial search space, traditional approaches to symbolic regression typically utilize evolutionary algorithms, especially genetic programming (GP) (Koza, 1992; Bäck et al., 2018). In GP-based symbolic regression, a population of mathematical expressions is “evolved” using evolutionary operations like selection, crossover, and mutation to improve a fitness function. While GP can be effective, it is also known to scale poorly to larger problems and to exhibit high sensitivity to hyperparameters.\nDeep learning has permeated almost all areas of artificial intelligence, from computer vision (Krizhevsky et al., 2012) to optimal control (Mnih et al., 2015). However, deep learning may seem incongruous with or even antithetical toward symbolic regression, given that neural networks are typically highly complex, difficult to interpret, and rely on gradient information. We propose a framework that resolves this incongruity by tying deep learning and symbolic regression together with a simple idea: use a large model (i.e. neural\nnetwork) to search the space of small models (i.e. symbolic expressions). This framework leverages the representational capacity of neural networks while entirely bypassing the need to interpret a network.\nWe present deep symbolic regression (DSR), a gradient-based approach for symbolic regression based on reinforcement learning. In DSR, a recurrent neural network (RNN) emits a distribution over mathematical expressions. Expressions are sampled from the distribution, instantiated, and evaluated based on their fitness to the dataset. This fitness is used as the reward signal to train the RNN parameters using a policy gradient algorithm. As training proceeds, the RNN adjusts the likelihood of an expression relative to its reward, assigning higher probabilities to better fitting expressions.\nWe demonstrate that DSR outperforms a standard GP implementation in its ability to recover exact symbolic expressions from data, both with and without added noise. 
We summarize our contributions as follows: 1) a novel method for solving symbolic regression that outperforms standard GP, 2) an autoregressive generative modeling framework for optimizing hierarchical, variable-length objects, 3) a framework that accommodates in situ constraints, and 4) a novel risk-seeking strategy that optimizes for best-case performance." }, { "heading": "2 RELATED WORK", "text": "Symbolic regression. Symbolic regression has a long history of evolutionary strategies, especially GP (Koza, 1992; Bäck et al., 2018; Uy et al., 2011). Among non-evolutionary approaches, the recent AI Feynman algorithm (Udrescu & Tegmark, 2019) is a multi-staged approach to symbolic regression leveraging the observation that physical equations often exhibit simplifying properties like multiplicative separability and translational symmetry. The algorithm identifies and exploits such properties to recursively define simplified sub-problems that can eventually be solved using simple techniques like a polynomial fit or small brute force search. Brunton et al. (2016) develop a sparse regression approach to recover nonlinear dynamics equations from data; however, their search space is limited to linear combinations of a library of basis functions.\nAutoML. Our framework has many parallels to a body of works within automated machine learning (AutoML) that use an autoregressive RNN to define a distribution over discrete objects and use reinforcement learning to optimize this distribution under a black-box performance metric (Zoph & Le, 2017; Ramachandran et al., 2017; Bello et al., 2017). The key methodological difference to our framework is that these works optimize objects that are both sequential and fixed length. For example, in neural architecture search (Zoph & Le, 2017), an RNN searches the space of neural network architectures, which are encoded by a sequence of discrete “tokens” specifying architectural properties (e.g. number of neurons) of each layer. The length of the sequence is fixed or scheduled during training. In contrast, a major contribution of our framework is defining a search space that is both inherently hierarchical and variable length.\nThe most similar AutoML work searches for neural network activation functions (Ramachandran et al., 2017). While the space of activation functions is hierarchical in nature, the authors (rightfully) constrain this space substantially by positing a functional unit that is repeated sequentially, thus restricting their search space back to a fixed-length sequence. This constraint is well-justified for learning activation functions, which tend to exhibit similar hierarchical structures. However, a repeating-unit constraint is not practical for symbolic regression because the ground truth expression may have arbitrary structure.\nAutoregressive models. The RNN-based distribution over expressions used in DSR is autoregressive, meaning each token is conditioned on the previously sampled tokens. Autoregressive models have proven to be useful for audio and image data (Oord et al., 2016a;b) in addition to the AutoML works discussed above; we further demonstrate their efficacy for hierarchical expressions.\nGraphRNN defines a distribution over graphs that generates an adjacency matrix one column at a time in autoregressive fashion (You et al., 2018). In principle, we could have constrained GraphRNN to define the distribution over expressions, since trees are a special case of graphs. 
However, GraphRNN constructs graphs breadth-first, whereas expressions are more naturally represented using depth-first traversals (Li et al., 2005). Further, DSR exploits the hierarchical nature of trees by providing the parent and sibling as inputs to the RNN, and leverages the additional structure of expression trees that a node’s value determines its number of children (e.g. cosine is a unary node)." }, { "heading": "3 METHODS", "text": "Our overall approach involves representing mathematical expressions by the pre-order traversals of their corresponding symbolic expression trees, developing an autoregressive model to generate expression trees under a pre-specified set of constraints, and using reinforcement learning to train the model to generate better-fitting expressions." }, { "heading": "3.1 GENERATING EXPRESSIONS WITH A RECURRENT NEURAL NETWORK", "text": "We leverage the fact that algebraic expressions can be represented using symbolic expression trees, a type of binary tree in which nodes map to mathematical operators, input variables, or constants. Operators are internal nodes and may be unary (e.g. sine) or binary (e.g. multiply). Input variables and constants are terminal nodes. We encode an expression τ by the pre-order traversal (i.e. depth-first, then left-to-right) of its corresponding expression tree.¹ We denote the ith node in the traversal as τi and the length of the traversal as |τ| = T. Each node has a value within a given library L of possible node values or “tokens,” e.g. {+, −, ×, ÷, sin, cos, x}. Expressions are generated one node at a time along the pre-order traversal (from τ1 to τT). For each node, a categorical distribution with parameters ψ defines the probabilities of selecting each node value from L. To capture the “context” of the expression as it is being generated, we condition this probability upon the selections of all previous nodes in that traversal. This conditional dependence can be achieved very generally using an RNN with parameters θ that outputs a probability vector ψ in an autoregressive manner.
Specifically, the ith output vector ψ(i) of the RNN defines the probability distribution for selecting the ith node value τi, conditioned on the previously selected node values τ1:(i−1):
$p(\tau_i \mid \tau_{1:(i-1)}; \theta) = \psi^{(i)}_{L(\tau_i)},$
where L(τi) is the index in L corresponding to node value τi. The likelihood of the sampled expression is computed using the chain rule of conditional probability:
$p(\tau \mid \theta) = \prod_{i=1}^{|\tau|} p(\tau_i \mid \tau_{1:(i-1)}; \theta) = \prod_{i=1}^{|\tau|} \psi^{(i)}_{L(\tau_i)}$
The sampling process is illustrated in Figure 1 and described in Algorithm 1. Additional algorithmic details of the sampling process are described in Subroutines 1 and 2 in Appendix A. Starting at the root node, a node value is sampled according to ψ(1). Subsequent node values are sampled autoregressively in a depth-first, left-to-right manner until the tree is complete (i.e. all tree branches reach terminal nodes). The resulting sequence of node values is the tree’s pre-order traversal, which can be used to reconstruct the tree² and its corresponding expression. Note that different samples of the distribution have different tree structures of different size. Thus, the search space is inherently both hierarchical and variable length.
¹ Given an expression tree (or equivalently, its pre-order traversal), the corresponding mathematical expression is unique; however, given an expression, its expression tree (or its corresponding traversal) is not unique. For example, x² and x · x are equivalent expressions but yield different trees. For simplicity, we use τ somewhat abusively to refer to an expression where it technically refers to an expression tree (or equivalently, its corresponding traversal).
² In general, a pre-order traversal is insufficient to uniquely reconstruct the tree. However, in this context, we know how many child nodes each node has based on its value, e.g. “multiply” is a binary operator and thus has two children. For domains without this property, the number of children can be sampled from an additional RNN output. A pre-order traversal plus the corresponding number of children for each node is sufficient to uniquely reconstruct the tree.
Providing hierarchical inputs to the RNN. Naively, the input to the RNN when sampling τi would be a representation (i.e. embedding or one-hot encoding) of the previously sampled token, τi−1. Indeed, this is typical in related autoregressive models, e.g. when generating sentences (Vaswani et al., 2017) or for neural architecture search (Zoph & Le, 2017). However, the search space for symbolic regression is inherently hierarchical, and the previously sampled token may actually be very distant from the next token to be sampled in the expression tree. For example, the fifth and sixth tokens sampled in Figure 1 are adjacent nodes in the traversal but are four edges apart in the expression tree. To better capture hierarchical information, we provide as inputs to the RNN a representation of the parent and sibling node of the token being sampled. We introduce an empty token for cases in which a node does not have a parent or sibling. Pseudocode for identifying the parent and sibling nodes given a partial traversal is provided in Subroutine 2 in Appendix A.
Constraining the search space. Under our framework, it is straightforward to apply a priori constraints to reduce the search space. To demonstrate, we impose several simple, domain-agnostic constraints: (1) Expressions are limited to a pre-specified minimum and maximum length. We selected a minimum length of 2 to prevent trivial expressions and a maximum length of 30 to ensure expressions are tractable. (2) The children of an operator should not all be constants, as the result would simply be a different constant. (3) The child of a unary operator should not be the inverse of that operator, e.g. log(exp(x)) is not allowed. (4) Direct descendants of trigonometric operators should not be trigonometric operators, e.g. sin(x + cos(x)) is not allowed because cosine is a descendant of sine. While still semantically meaningful, such composed trigonometric operators do not appear in virtually any scientific domain.
We apply these constraints in situ (concurrently with autoregressive sampling) by zeroing out the probabilities of selecting tokens that would violate a constraint. Pseudocode for this process is provided in Subroutine 1 in Appendix A. This process ensures that all samples adhere to all constraints, without rejecting samples post hoc. In contrast, imposing constraints in GP-based symbolic regression can be problematic (Craenen et al., 2001). In practice, evolutionary operations that violate constraints are typically rejected post hoc (Fortin et al., 2012).
Algorithm 1: Sampling an expression from the RNN
1 function SampleExpression(θ, L)
  input: RNN with parameters θ; library of tokens L
  output: Pre-order traversal τ of an expression sampled from the RNN
2   τ = []   // Empty list
3   x = empty‖empty   // Initial RNN input is empty parent and sibling
4   h0 = 0   // Initialize RNN cell state to the zero vector
5   for i = 1, . . . , T do
6     (ψ(i), hi) = RNN(x, hi−1; θ)
7     ψ(i) ← ApplyConstraints(ψ(i), L, τ)   // Adjust probabilities
8     τi = Categorical(ψ(i))   // Sample next token
9     τ ← τ‖τi   // Append token to traversal
10    if ExpressionComplete(τ) then
11      return τ
12    x ← ParentSibling(τ)   // Compute next parent and sibling
13  end
14  return τ
Algorithm 2: Deep symbolic regression
1 function DSR(α, N, L, X, y)
  input: learning rate α; batch size N; library of tokens L; input dataset (X, y)
  output: Best fitting expression τ⋆
2   Initialize RNN with parameters θ, defining distribution over expressions p(·|θ)
3   τ⋆ = null
4   b = 0
5   repeat
6     T = {τ(i) ∼ p(·|θ)}i=1:N   // Sample expressions (Algorithm 1)
7     T ← {OptimizeConstants(τ(i), X, y)}i=1:N   // Optimize constants
8     R = {R(τ(i)) − λC C(τ(i))}i=1:N   // Compute rewards
9     ĝ = $\frac{1}{N}\sum_{i=1}^{N} R(\tau^{(i)})\,\nabla_\theta \log p(\tau^{(i)} \mid \theta)$   // Compute policy gradient
10    θ ← θ + αĝ   // Apply gradient
11    if max R > R(τ⋆) then τ⋆ ← τ(argmax R)   // Update best expression
12  return τ⋆" }, { "heading": "3.2 TRAINING THE RNN USING POLICY GRADIENTS", "text": "Optimizing the parameters of the sampled expressions. Once a pre-order traversal is sampled, we instantiate the corresponding symbolic expression. The expression may have several constant tokens, which can be viewed as model parameters. We train these model parameters by minimizing the mean-squared error with respect to an input dataset using a nonlinear optimization algorithm, e.g. BFGS (Fletcher, 2013). We perform this inner optimization loop for each sampled expression before training the RNN.
Training the RNN using policy gradients. Given a distribution over mathematical expressions p(τ|θ) and a measure of performance of an expression R(τ), we consider the objective to maximize J(θ), defined as the expectation of R under expressions sampled from the distribution:
$J(\theta) \equiv \mathbb{E}_{\tau \sim p(\tau \mid \theta)}\left[R(\tau)\right]$
We use REINFORCE (Williams, 1992) to maximize this expectation via gradient ascent:
$\nabla_\theta J(\theta) = \nabla_\theta \mathbb{E}_{\tau \sim p(\tau \mid \theta)}\left[R(\tau)\right] = \mathbb{E}_{\tau \sim p(\tau \mid \theta)}\left[R(\tau)\,\nabla_\theta \log p(\tau \mid \theta)\right]$
This result allows us to estimate the expectation using samples from the distribution. Specifically, we can obtain an unbiased estimate of ∇θJ(θ) by computing the sample mean over a batch of N sampled expressions T = {τ(i)}i=1:N:
$\nabla_\theta J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N} R(\tau^{(i)})\,\nabla_\theta \log p(\tau^{(i)} \mid \theta)$
Reward function. A standard fitness measure in GP-based symbolic regression is the normalized root-mean-square error (NRMSE), the root-mean-square error normalized by the standard deviation of the target values, σy. That is, given a dataset of n (X, y) pairs, $\mathrm{NRMSE} = \frac{1}{\sigma_y}\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$, where ŷ = f(X) are the predicted values computed using the candidate expression f. Normalization by σy makes the metric commensurate across different datasets with potentially different ranges. However, metrics based on mean-square error exhibit extraordinarily large values for some expressions, e.g. an expression that incorrectly divides by an input variable with values near zero. For a gradient-based approach like DSR, this results in the gradient being dominated by the worst expressions, which can lead to instability.
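A minimal rendering of this fitness measure, using only NumPy, is:

```python
import numpy as np

def nrmse(y, y_hat):
    # Normalized root-mean-square error. For poor expressions (e.g. ones
    # that divide by near-zero inputs) this can be arbitrarily large,
    # which is what destabilizes the gradient estimate described above.
    return np.sqrt(np.mean((y - y_hat) ** 2)) / np.std(y)
```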
We found that a bounded reward function is more stable; thus, we applied a squashing function, yielding the reward function R(τ) = 1/(1 + NRMSE).³
³ Since GP-based approaches using tournament selection only rely on the rankings of the fitness measure within the population, large fitness values are not problematic. Since R(τ) is monotonic in NRMSE, GP is unaffected by squashing.
We introduce the “vanilla” version of DSR in Algorithm 2. Below we describe several simple extensions.
Reward baseline. The above approximation to ∇θJ(θ) is an unbiased gradient estimate, but in practice has high variance. To reduce variance, we include a baseline function b:
$\nabla_\theta J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N} \left[R(\tau^{(i)}) - b\right] \nabla_\theta \log p(\tau^{(i)} \mid \theta)$
As long as the baseline is not a function of the current batch of expressions, the gradient estimate is still unbiased. We define the baseline function as an exponentially-weighted moving average of batches of rewards. Intuitively, the gradient step increases the likelihood of expressions above the baseline and decreases the likelihood of expressions below the baseline.
Complexity penalty. We include an optional complexity penalty that is added to the reward function. For simplicity, we consider the complexity metric |τ|, i.e. the number of nodes in the expression tree. More complicated metrics have been proposed that capture hierarchical features of the tree and/or deduced properties of the resulting expression (Vladislavleva et al., 2008).
Algorithm 3: Deep symbolic regression with baseline, risk-seeking, entropy bonus, and complexity penalty
1 function DSR(α, β, λC, λH, ε, N, L, X, y)
  input: learning rate α; moving average coefficient β; complexity coefficient λC; entropy coefficient λH; risk factor ε; batch size N; library of tokens L; input dataset (X, y)
  output: Best fitting expression τ⋆
2   Initialize RNN with parameters θ, defining distribution over expressions p(·|θ)
3   τ⋆ = null
4   b = 0
5   repeat
6     T = {τ(i) ∼ p(·|θ)}i=1:N   // Sample expressions (Algorithm 1)
7     T ← {OptimizeConstants(τ(i), X, y)}i=1:N   // Optimize constants
8     R = {R(τ(i)) − λC C(τ(i))}i=1:N   // Compute rewards
9     Rε = (1 − ε)-percentile of R   // Compute threshold
10    T ← {τ(i) : R(τ(i)) ≥ Rε}   // Select subset of expressions
11    R ← {R(τ(i)) : R(τ(i)) ≥ Rε}   // Select subset of rewards
12    ĝ1 = ReduceMean((R − b) ∇θ log p(T |θ))   // Compute policy gradient
13    ĝ2 = ReduceMean(λH ∇θH(T |θ))   // Compute entropy gradient
14    θ ← θ + α(ĝ1 + ĝ2)   // Apply gradients
15    b ← β · ReduceMean(R) + (1 − β) b   // Update baseline
16    if max R > R(τ⋆) then τ⋆ ← τ(argmax R)   // Update best expression
17  return τ⋆
Entropy bonus. We provide a bonus to the loss function proportional to the entropy of the sampled expressions. In accordance with the maximum entropy reinforcement learning framework (Haarnoja et al., 2018), this bonus serves two purposes. First, it encourages the RNN to explore more expressions, preventing premature convergence to a local optimum. In practice, this often leads to a better end result. Second, it encourages the RNN to assign equal likelihood to different expressions that have equal fitness.
Risk-seeking. The policy performance, J, is defined as an expectation. However, in practice, the performance of symbolic regression is measured by the single or few best expressions. Thus, we employ a novel risk-seeking technique in which only the top ε percentile of samples from each batch are used in the gradient computation.
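A sketch of this filtering step, which keeps only the top ε fraction of a batch before the gradient update, might look like the following; NumPy is used purely for illustration.

```python
import numpy as np

def risk_seeking_filter(rewards, eps=0.1):
    # Keep samples at or above the (1 - eps)-quantile of the batch rewards,
    # mirroring lines 9-11 of Algorithm 3.
    rewards = np.asarray(rewards)
    threshold = np.quantile(rewards, 1.0 - eps)
    keep = rewards >= threshold
    return keep, threshold
```

The policy-gradient and entropy terms are then averaged only over the kept samples.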
Risk-seeking has the effect of increasing best-case performance at the expense of lower worst-case and average performance. This process is essentially the opposite of the EpOpt technique (Rajeswaran et al., 2016) used for risk-averse reinforcement learning, in which only the bottom ε percentile of samples from each batch are used.
The complete algorithm, including reward baseline, complexity penalty, entropy bonus, and risk-seeking, is shown in Algorithm 3." }, { "heading": "4 RESULTS AND DISCUSSION", "text": "Evaluating DSR. We evaluated DSR on a set of 12 commonly used symbolic regression benchmarks (Uy et al., 2011), as well as 4 additional variants in which we introduced real-valued constants to demonstrate the inner optimization loop. Each benchmark is defined by a ground truth expression, a training and testing dataset, and a set of allowed operators, described in Table 2 in Appendix A. The training data is used to compute the reward for each candidate expression, the test data is used to evaluate the best found candidate expression at the end of training, and the ground truth function is used to determine whether the best found candidate expression was correctly recovered.
As a baseline, we compared against standard GP-based symbolic regression. To ensure a fair comparison, the same constant optimizer (BFGS) was used for both methods. We ran independent training runs for GP and DSR for each benchmark expression (n = 100 for benchmarks without constants; n = 10 for benchmarks with constants). For each experiment, we generated 1,000 candidate expressions per generation/iteration for 1,000 generations/iterations, resulting in 1,000,000 total expressions. For each training run, the expression with the best reward is selected and we record the NRMSE on the test data.
For GP, we used the open-source software package “deap” (Fortin et al., 2012). For DSR, the RNN comprised a single-layer LSTM of 32 hidden units. Additional hyperparameters and experiment details are provided in Appendix A.
In Table 1, we report the percentage of runs that correctly recover the expression and the NRMSE on the test data for each benchmark. DSR significantly outperforms GP in its ability to exactly recover benchmark expressions. DSR also outperforms GP in the average NRMSE across all expressions; however, we observe that for the few expressions with low or zero recovery rate (e.g. Nguyen-7, Nguyen-8, and Nguyen-12), GP sometimes exhibits lower NRMSE. One explanation is that GP is more prone to overfitting the expression to the dataset. As an evolutionary approach, GP directly modifies the previous generation’s expressions, allowing it to make small “corrections” that decrease error each generation even if the functional form is far from correct. In contrast, in DSR the RNN “rewrites” each expression from scratch each iteration after learning from a gradient update, making it less prone to overfitting.
Surprisingly, DSR consistently performed best without a complexity penalty, i.e. λC = 0. Due to the autoregressive nature of the RNN, shorter expressions tend to exhibit higher likelihood than longer ones. We postulate that this property produces a self-regularization effect that precludes the need for an explicit complexity penalty.
Ablation studies. Algorithm 3 includes several additional components relative to the “vanilla” Algorithm 2. 
We performed a series of ablation studies to quantify the effect of each of these components, along with the effects of the various constraints on the search space, and including the parent and sibling as input to the RNN instead of the previous node value. In Figure 2, we performed DSR on the set of 12 Nguyen benchmarks for each ablation. DSR is still competitive with GP even when removing all improvements and all constraints.\nNoisy data and amount of data. We evaluated the robustness of DSR to noisy data by adding independent Gaussian noise to the dependent variable, with mean zero and standard deviation proportional to the root-mean-square of the dependent variable in the training data. In Figure 3, we varied the proportionality constant from 0 (noiseless) to 10−1 and compared the performance of GP and DSR across the set of 12 Nguyen benchmarks. DSR still outperforms GP in both recovery rate and NRMSE across noise levels.\nSymbolic regression excels in the low-data setting when data is noiseless, hence, the benchmark expressions included herein include only 20 data points (see Table 2). With added noise, increasing the amount of data\nsmooths the reward function and may help prevent overfitting. Thus, we repeated the noise experiments using the same benchmarks but with 10-fold larger training datasets (200 points data points). As expected, recovery rates tend to increase for both methods; however, DSR maintains a much larger improvement than GP at higher noise levels." }, { "heading": "5 CONCLUSION", "text": "We introduce an unconventional approach to symbolic regression based on reinforcement learning that outperforms a standard GP-based method on recovering exact expressions on benchmark problems, both with and without added noise. Since both DSR and GP generate expression trees, there are many opportunities for hybrid methods, for example including several generations of evolutionary operations in the inner optimization loop. From the perspective of AutoML, the main contributions are defining a flexible distribution over hierarchical, variable-length objects that allows imposing in situ constraints, and using risk-seeking training to optimize best-case performance. Thus, we note that our framework is easily extensible to domains outside symbolic regression, which we save for future work; for example, searching the space of organic molecular structures for high binding affinity to a reference compound. We chose symbolic regression to demonstrate our framework in part because of the large search space, broad applicability, computationally expedient inner optimization loop (sub-second), and availability of vetted benchmark problems and baseline methods." }, { "heading": "APPENDIX A", "text": "Hyperparameters. DSR hyperparameters are listed in Table 3. GP hyperparameters are listed in Table 4. The same hyperparameters were used for all experiments and all benchmark expressions.\nAdditional details for performance comparison experiments. Details of the benchmark symbolic regression problems are shown in Table 2. All benchmarks use the function set {+,−,×,÷, sin, cos, exp, log}. To ensure closures, we use protected versions of operators: log returns the logarithm of the absolute value of its argument, and ÷, exp, and log return 1 for arguments that would cause overflow or other errors. Benchmarks without constants can be recovered exactly, thus recovery is defined by exact correctness (modulo floating point precision error). 
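One way to implement such an exact-correctness check is via symbolic simplification. The sketch below uses SymPy purely as an illustration (the paper does not specify its tooling); x is declared positive so that equivalences involving log and sqrt resolve.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def exactly_recovers(candidate, ground_truth):
    # True if the two expression strings are symbolically identical.
    lhs = sp.sympify(candidate, locals={'x': x})
    rhs = sp.sympify(ground_truth, locals={'x': x})
    return sp.simplify(lhs - rhs) == 0

# e.g. the alternate form of Nguyen-8 noted below:
print(exactly_recovers('exp(x/(x + x) * log(x))', 'sqrt(x)'))  # True
```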
Note that Nguyen-8 can be recovered via $\exp\left(\frac{x}{x+x}\log(x)\right)$ and Nguyen-11 can be recovered via exp(y log(x)).
Table 3: DSR hyperparameters
Batch size: 1,000
Iterations: 1,000
Learning rate (α): 0.0003
Entropy coefficient (λH): 0.08
Complexity coefficient (λC): 0
Moving average coefficient (β): 0.5
Risk factor (ε): 0.1
Table 4: GP hyperparameters
Population size: 1,000
Generations: 1,000
Fitness function: NRMSE
Initialization method: Full
Selection type: Tournament
Tournament size (k): 3
Crossover probability: 0.5
Mutation probability: 0.1
Minimum subtree depth (dmin): 0
Maximum subtree depth (dmax): 2
For benchmarks with constants, constants are optimized using BFGS with an initial guess of 1.0 for each constant. We ensured that all benchmarks with constants do not get stuck in a poor local optimum when optimizing with BFGS and the candidate functional form is correct. Since floating point constants cannot be recovered exactly, for benchmarks with constants we manually determined the correctness of the functional form by inspection. Since constant optimization is a computational bottleneck, we limited each expression to three constants for both the DSR and GP experiments.
For GP, the initial population of expressions is generated using the “full” method (Koza, 1992) with depth randomly selected between dmin and dmax. The selection operator is defined by deterministic tournament selection, in which the expression with the best fitness among k randomly selected expressions is chosen. The crossover operator is defined by swapping random subtrees between two expressions. The point mutation operator is defined by replacing a random subtree with a new subtree initialized using the “full” method with depth randomly selected between dmin and dmax.
Additional details for ablation studies. In Figure 2, “Parent/sibling” denotes that the previous node of the traversal is provided as input to the RNN, rather than the parent and sibling nodes. “Risk-seeking” denotes no risk-seeking, equivalent to ε = 1. “Entropy bonus” denotes no entropy bonus, equivalent to λH = 0. “Reward baseline” denotes no reward baseline, equivalent to β = 0. “All improvements” denotes combining the ablations for Parent/sibling, Risk, Entropy, and Baseline. “Constrain trig” denotes no constraint precluding nested trigonometric operators. “Constrain inverse” denotes no constraint precluding inverse unary operators. “Constrain min/max” denotes no constraint precluding minimum or maximum length. (If the maximum length is reached, the expression is appended with x until complete.) “Constraints” denotes combining the ablations for Trig, Inverse, and Min/max. “All constraints & improvements” denotes combining all ablations.
Training curves. Figures 4 and 5 show the reward (1/(1 + NRMSE)) and the recovery rate, respectively, as a function of training step (DSR iteration or GP generation). For benchmarks with constants, the constant optimizer can allow both algorithms to quickly reach a reward near 1.0. For these benchmarks, we provide zoomed inset plots that demonstrate if and when all independent training runs correctly recover the expression (Figure 4: Constant-1, Constant-2, and Constant-3). Note that the NRMSE and recovery values in Table 1 correspond to the final point on each curve in Figures 4 and 5, respectively.
Additional subroutines. DSR includes several subroutines used when sampling an expression from the RNN and during training. 
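Before the pseudocode listings, one of these subroutines, the parent/sibling computation of Subroutine 2 below, can be rendered in Python roughly as follows; the list-of-tokens representation and the arity map are our own placeholders.

```python
def parent_sibling(traversal, arity):
    # Parent and sibling of the next token to be sampled, given a partial
    # pre-order traversal and a function mapping each token to its arity.
    if not traversal:
        return None, None  # the root has neither parent nor sibling
    if arity(traversal[-1]) > 0:
        return traversal[-1], None  # last token is an operator awaiting children
    counter = 0
    for i in range(len(traversal) - 1, -1, -1):
        counter += arity(traversal[i]) - 1
        if counter == 0:
            return traversal[i], traversal[i + 1]
    return None, None  # unreachable for well-formed partial traversals

print(parent_sibling(['+', 'x'], {'+': 2, 'x': 0}.get))  # ('+', 'x')
```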
In Subroutine 1, we describe the function ApplyConstraints(ψ, L, τ) used in Algorithm 1, which zeros out the probabilities of tokens that would violate any given constraints. Within this subroutine, the user-specified function ViolatesConstraint(τ, Li) returns TRUE if adding the ith token of the library L to the partial traversal τ would violate any user-specified constraints, and FALSE otherwise. In Subroutine 2, we describe the function ParentSibling(τ) used in Algorithm 1, which computes the parent and sibling of the next token to be sampled. This subroutine uses the following logic. If the final node in the partial traversal is a unary or binary operator, then that node is the parent and there is no sibling. Otherwise, the subroutine iterates backward through the traversal until finding a node with an unselected child node. That node is the parent and the subsequent node is the sibling. Within this subroutine, the function Arity(τi) simply returns the arity (number of arguments) of token τi, i.e. two for binary operators, one for unary operators, or zero for input variables and constants.
In Subroutine 3, we describe the function OptimizeConstants(τ, X, y) used in Algorithms 2 and 3, which optimizes the placeholder constants c of an expression with respect to the input dataset (X, y) using a black-box optimizer, e.g. BFGS. Within this subroutine, the function Instantiate(τ) instantiates the symbolic expression as a function f(X; c) with inputs X and parameters (constants) c, and the function ReplaceConstants(τ, c⋆) replaces the placeholder constants in the expression with the optimized constants.
Subroutine 1: Apply generic constraints in situ when sampling from the RNN
1 function ApplyConstraints(ψ, L, τ)
  input: Categorical probabilities ψ; corresponding library of tokens L; partially sampled traversal τ
  output: Adjusted categorical probabilities ψ
2   L = |ψ|   // Length of library
3   for i = 1, . . . , L do
4     if ViolatesConstraint(τ, Li) then
5       ψi ← 0   // Constrain that token
6   end
7   ψ ← ψ / Σi ψi   // Normalize back to 1
8   return ψ
Subroutine 2: Computing parent and sibling inputs to the RNN
1 function ParentSibling(τ)
  input: Partially sampled traversal τ
  output: Parent and sibling tokens of the next token to be sampled
2   L = |τ|   // Length of partial traversal
3   counter = 0   // Counter for number of unselected nodes
4   if Arity(τL) > 0 then
5     parent = τL
6     sibling = empty
7     return parent‖sibling
8   for i = L, . . . , 1 do   // Iterate backward
9     counter ← counter + Arity(τi) − 1
10    if counter = 0 then
11      parent = τi
12      sibling = τi+1
13      return parent‖sibling
14  end
Subroutine 3: Optimize the constants of an expression (inner optimization loop)
1 function OptimizeConstants(τ, X, y)
  input: Expression τ with placeholder constants c; input dataset (X, y)
  output: Expression τ⋆ with optimized constants c⋆
2   f(X, c) = Instantiate(τ)   // Instantiate the symbolic expression
3   c⋆ = argmin_c ‖y − f(X, c)‖₂²   // Minimize error (e.g. with BFGS)
4   τ⋆ = ReplaceConstants(τ, c⋆)   // Replace placeholder constants
5   return τ⋆" } ]
2019
null
SP:4706017e6f8b958c7d0825fed98b285ea2994b59
[ "This paper proposes a new pointwise convolution layer, which is non-parametric and can be efficient thanks to the fast conventional transforms. Specifically, it could use either DCT or DHWT to do the transforming job and explores the optimal block structure to use this new kind of PC layer. Extensive experimental studies are provided to verify the new PC layer and experimental results show that the new layer could reduce the parameters and FLOPs while not loosing accuracy.", "This paper presents a new pointwise convolution (PC) method which applies conventional transforms such as DWHT and DCT. The proposed method aims to reduce the computational complexity of CNNs without degrading the performance. Compared with the original PC layer, the DWHT/DCT-based methods do not require any learnable parameters and reduce the floating-point operations. The paper also empirically optimizes the networks by removing ReLU after the proposed PC layers and using conventional transforms for high-level features extraction. Experiments on CIFAR100 show that the DWHT-based model improves the accuracy and reduces parameters and FLOPs compared with MobileNet-V1." ]
Some conventional transforms such as the Discrete Walsh-Hadamard Transform (DWHT) and the Discrete Cosine Transform (DCT) have been widely used as feature extractors in image processing but rarely applied in neural networks. However, we found that these conventional transforms have the ability to capture cross-channel correlations without any learnable parameters in DNNs. This paper first proposes to apply conventional transforms to pointwise convolution, showing that such transforms significantly reduce the computational complexity of neural networks without accuracy degradation. Especially for DWHT, it requires no floating point multiplications but only additions and subtractions, which can considerably reduce computation overheads. In addition, its fast algorithm further reduces the complexity of the floating point additions from O(n²) to O(n log n). These non-parametric and low-computation properties construct extremely efficient networks in terms of the number of parameters and operations, while enjoying an accuracy gain. Our proposed DWHT-based model gained a 1.49% accuracy increase with 79.4% fewer parameters and 49.4% fewer FLOPs compared with its baseline model (MobileNet-V1) on the CIFAR 100 dataset.
[]
[ { "authors": [ "Alfredo Canziani", "Adam Paszke", "Eugenio Culurciello" ], "title": "An analysis of deep neural network models for practical applications", "venue": "CoRR, abs/1605.07678,", "year": 2016 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio" ], "title": "Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1", "venue": "CoRR, abs/1602.02830,", "year": 2016 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": "CoRR, abs/1511.00363,", "year": 2015 }, { "authors": [ "Saeed Dabbaghchian", "Masoumeh P Ghaemmaghami", "Ali Aghagolzadeh" ], "title": "Feature extraction using discrete cosine transform and discrimination power analysis with a face recognition technology", "venue": "Pattern Recognition,", "year": 2010 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "G.D.S.L.P. G" ], "title": "Walshhadamard transform kernel-based feature vector for shot boundary detection", "venue": "IEEE Transactions on Image Processing,", "year": 2014 }, { "authors": [ "Arthita Ghosh", "Rama Chellappa" ], "title": "Deep feature extraction in the dct domain", "venue": "In 2016 23rd International Conference on Pattern Recognition (ICPR),", "year": 2016 }, { "authors": [ "Suyog Gupta", "Ankur Agrawal", "Kailash Gopalakrishnan", "Pritish Narayanan" ], "title": "Deep learning with limited numerical precision", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "M Hassan", "I Osman", "M Yahia" ], "title": "Walsh-hadamard transform for facial feature extraction in face recognition", "venue": "World Academy of Science, Engineering and Technology,", "year": 2007 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "M. Horowitz" ], "title": "computing’s energy problem (and what we can do about it)", "venue": "IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC),", "year": 2014 }, { "authors": [ "Andrew G. 
Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "CoRR, abs/1704.04861,", "year": 2017 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Quantized neural networks: Training neural networks with low precision weights and activations", "venue": "CoRR, abs/1609.07061,", "year": 2016 }, { "authors": [ "Forrest N Iandola", "Song Han", "Matthew W Moskewicz", "Khalid Ashraf", "William J Dally", "Kurt Keutzer" ], "title": "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and¡ 0.5 mb model size", "venue": "arXiv preprint arXiv:1602.07360,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate", "venue": "shift. CoRR,", "year": 2015 }, { "authors": [ "Vidit Jain", "Erik Learned-Miller" ], "title": "Fddb: A benchmark for face detection in unconstrained settings", "venue": null, "year": 2010 }, { "authors": [ "Yunho Jeon", "Junmo Kim" ], "title": "Active convolution: Learning the shape of convolution for image classification", "venue": "CoRR, abs/1703.09076,", "year": 2017 }, { "authors": [ "Yunho Jeon", "Junmo Kim" ], "title": "Constructing fast network through deconstruction of convolution", "venue": "CoRR, abs/1806.07370,", "year": 2018 }, { "authors": [ "Felix Juefei-Xu", "Vishnu Naresh Boddeti", "Marios Savvides" ], "title": "Local binary convolutional neural networks", "venue": "CoRR, abs/1608.06049,", "year": 2016 }, { "authors": [ "Chi-Wah Kok" ], "title": "Fast algorithm for computing discrete cosine transform", "venue": "IEEE Transactions on Signal Processing,", "year": 1997 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet V2: practical guidelines for efficient CNN architecture", "venue": "design. CoRR,", "year": 2018 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "W.K. Pratt", "J. Kane", "H.C. 
Andrews" ], "title": "Hadamard transform image coding", "venue": "Proceedings of the IEEE,", "year": 1969 }, { "authors": [ "K Ramamohan Rao", "Ping Yip" ], "title": "Discrete cosine transform: algorithms, advantages, applications", "venue": "Academic press,", "year": 2014 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "arXiv preprint arXiv:1802.01548,", "year": 2018 }, { "authors": [ "Mark Sandler", "Andrew G. Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation", "venue": "CoRR, abs/1801.04381,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke" ], "title": "Inception-v4, inception-resnet and the impact of residual connections on learning", "venue": "CoRR, abs/1602.07261,", "year": 2016 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Vincent Vanhoucke", "Andrew Senior", "Mark Z Mao" ], "title": "Improving the speed of neural networks on cpus", "venue": null, "year": 2011 }, { "authors": [ "AliReza Vard", "AmirHassan Monadjemi", "Kamal Jamshidi", "Naser Movahhedinia" ], "title": "Fast texture energy based image segmentation using directional walsh–hadamard transform and parametric active contour models", "venue": "Expert Systems with Applications,", "year": 2011 }, { "authors": [ "Shuo Yang", "Ping Luo", "Chen-Change Loy", "Xiaoou Tang" ], "title": "Wider face: A face detection benchmark", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Shifeng Zhang", "Xiangyu Zhu", "Zhen Lei", "Hailin Shi", "Xiaobo Wang", "Stan Z Li" ], "title": "S3fd: Single shot scale-invariant face detector", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices. CoRR, abs/1707.01083, 2017b", "venue": "URL http: //arxiv.org/abs/1707.01083", "year": 2017 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Zhang" ], "title": "2017a) which is one of the baseline methods", "venue": null, "year": 2017 }, { "authors": [ "Zhang" ], "title": "2017a), we set MobileNet-V1 0.25x as our baseline backbone model", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Large Convolutional Neural Networks (CNNs) (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016; Szegedy et al., 2016b;a) and automatic Neural Architecture Search (NAS) based networks (Zoph et al., 2018; Liu et al., 2018; Real et al., 2018) have evolved to show remarkable accuracy on various tasks such as image classification (Deng et al., 2009; Krizhevsky & Hinton, 2009), object detection (Lin et al., 2014), benefited from huge amount of learnable parameters and computations. However, these large number of weights and high computational cost enabled only limited applications for mobile devices that require the constraint on memory space being low as well as for devices that require real-time computations (Canziani et al., 2016).\nWith regard to solving these problems, Howard et al. (2017); Sandler et al. (2018); Zhang et al. (2017b); Ma et al. (2018) proposed parameter and computation efficient blocks while maintaining almost same accuracy compared to other heavy CNN models. All of these blocks utilized depthwise separable convolution, which deconstructed the standard convolution with the (3 × 3 × C) size for each kernel into spatial information specific depthwise convolution (3 × 3 × 1) and channel information specific pointwise (1 × 1 × C) convolution. The depthwise separable convolution achieved comparable accuracy compared to standard spatial convolution with hugely reduced parameters and FLOPs. These reduced resource requirements made the depthwise separable convolution as well as pointwise convolution (PC) more widely used in modern CNN architectures.\nNevertheless, we point out that the existing PC layer is still computationally expensive and occupies a lot of proportion in the number of weight parameters (Howard et al., 2017). Although the demand toward PC layer has been and will be growing exponentially in modern neural network architectures, there has been a little research on improving the naive structure of itself.\nTherefore, this paper proposes a new PC layer formulated by non-parametric and extremely fast conventional transforms. Conventional transforms that we applied on CNN models are Discrete\nWalsh-Hadamard Transform (DWHT) and Discrete Cosine Transform (DCT), which have widely been used in image processing but rarely been applied in CNNs (Ghosh & Chellappa, 2016).\nWe empirically found that although both of these transforms do not require any learnable parameters at all, they show the sufficient ability to capture the cross-channel correlations. This non-parametric property enables our proposed CNN models to be significantly compressed in terms of the number of parameters, leading to get the advantages (i.e. efficient distributed training, less communication between server and clients) referred by Iandola et al. (2016). We note that especially DWHT is considered to be a good replacement of the conventional PC layer, as it requires no floating point multiplications but only additions and subtractions by which the computation overheads of PC layers can significantly be reduced. Furthermore, DWHT can take a strong advantage of its fast version where the computation complexity of the floating point operations is reduced from O(n2) to O(n log n). 
These non-parametric, low-computation properties make the resulting neural networks extremely efficient in both parameters and computation, while also enjoying accuracy gains.\nOur contributions are summarized as follows:\n• We propose a new PC layer formulated with conventional transforms, which requires no learnable parameters and significantly reduces the number of floating point operations compared to the existing PC layer.\n• The great benefit of using the bases of existing transforms comes from their fast versions, which drastically decrease the computational complexity of neural networks without degrading accuracy.\n• We found that applying ReLU after conventional transforms discards important extracted information, leading to a significant drop in accuracy. Based on this finding, we propose the optimal computation block for conventional transforms.\n• We also found that the conventional transforms are especially effective for extracting high-level features in neural networks. Based on this, we propose a new transform-based neural network architecture. Specifically, using DWHT, our proposed method yields a 1.49% accuracy gain with 79.4% fewer parameters and 49.4% fewer FLOPs compared with its baseline model (MobileNet-V1) on the CIFAR 100 dataset." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 DECONSTRUCTION AND DECOMPOSITION OF CONVOLUTIONS", "text": "To reduce the computational complexity of existing convolution methods, several approaches that rethink and deconstruct the naive convolution structure have been proposed. Simonyan & Zisserman (2014) factorized a large kernel (e.g., 5 × 5) in a convolution layer into several small kernels (e.g., 3 × 3) across several convolution layers. Jeon & Kim (2017) pointed out a limitation of the existing convolution, namely its fixed receptive field, and accordingly introduced learnable spatial displacement parameters, giving the convolution flexibility. Building on Jeon & Kim (2017), Jeon & Kim (2018) proved that the standard convolution can be deconstructed into a single PC layer applied to spatially shifted channels, and based on that proposed a very efficient convolution layer, the active shift layer, which replaces spatial convolutions with shift operations.\nIt is worth noting that the existing PC layer accounts for a huge proportion of the computation and weight parameters in modern lightweight CNN models (Howard et al., 2017; Sandler et al., 2018; Ma et al., 2018). Specifically, in MobileNet-V1 (Howard et al., 2017) the existing PC layers require 94% of the overall computational cost and 74% of the overall number of weight parameters. There have therefore been attempts to reduce the computational complexity of the PC layer. Zhang et al. (2017b) proposed ShuffleNet-V1, in which the features are decomposed into several groups over channels and the PC operation is conducted within each group, reducing the number of weight parameters and FLOPs by the number of groups G. However, Ma et al. (2018) showed that the memory access cost increases as G increases, leading to slower inference. Similarly to the aforementioned methods, our work aims to reduce the computational complexity and the number of weight parameters in a convolution layer. However, our objective is more oriented toward finding mathematically efficient algorithms that make the weights in convolution kernels more effective in feature representation as well as more efficient in terms of computation."
}, { "heading": "2.2 QUANTIZATION", "text": "Quantization in neural networks reduced the number of bits utilized to represent the weights and/or activations. Vanhoucke et al. (2011) applied 8-bit quantization on weight parameters, which enabled considerable speed-up with small drop of accuracy. Gupta et al. (2015) applied 16-bit fixed point representation with stochastic rounding. Based on Han et al. (2015b) which pruned the unimportant weight connections through thresholding the values of weight, Han et al. (2015a) successfully combined the pruning with 8 bits or less quantization and huffman encoding. The extreme case of quantized networks was evolved from Courbariaux et al. (2015), which approximated weights with the binary (+1,−1) values. From the milestone of Courbariaux et al. (2015), Courbariaux & Bengio (2016); Hubara et al. (2016) constructed Binarized Neural Networks which either stochastically or deterministically binarize the real value weights and activations. These binarized weights and activations lead to significantly reduced run-time by replacing floating point multiplications with 1-bit XNOR operations.\nBased on Binarized Neural Networks (Courbariaux & Bengio, 2016; Hubara et al., 2016), Local Binary CNN (Juefei-Xu et al., 2016) proposed a convolution module that utilizes binarized nonlearnable weights in spatial convolution based on Local Binary Patterns, thus replacing multiplications with addition/subtraction operations in spatial convolution. However, they did not consider reducing computation complexity in PC layer and remained the weights of PC layer learnable floating point variables. Our work shares the similarity to Local Binary CNN (Juefei-Xu et al., 2016) in using binary fixed weight values. However, Local Binary Patterns have some limitations for being applied in CNN since they can only be used in spatial convolution as well as there are no approaches that enable fast computation of them." }, { "heading": "2.3 CONVENTIONAL TRANSFORMS", "text": "In general, several transform techniques have been applied for image processing. Discrete Cosine Transform (DCT) has been used as a powerful feature extractor (Dabbaghchian et al., 2010). For N -point input sequence, the basis kernel of DCT is defined as a list of cosine values as below:\nCm = [cos( (2x+ 1)mπ\n2N )], 0 ≤ x ≤ N − 1 (1)\nwhere m is the index of a basis and captures higher frequency information in the input signal as m increases. This property led DCT to be widely applied in image/video compression techniques that emphasize the powers of image signals in low frequency regions (Rao & Yip, 2014).\nDiscrete Walsh Hadamard Transform (DWHT) is a very fast and efficient transform by using only +1 and −1 elements in kernels. These binary elements in kernels allow DWHT to perform without any multiplication operations but addition/subtraction operations. Therefore, DWHT has been widely used for fast feature extraction in many practical applications, such as texture image segmentation (Vard et al., 2011), face recognition (Hassan et al., 2007), and video shot boundary detection (G. & S., 2014).\nFurther, DWHT can take advantage of a structured-wiring-based fast algorithm (Algorithm 1) as well as allowing very high efficiency in encoding the spatial information (Pratt et al., 1969). The basis kernel matrix of DWHT is defined using the previous kernel matrix as below:\nHD =\n( HD−1 HD−1 HD−1 −HD−1 ) , (2)\nwhere H0 = 1 and D ≥ 1. In this paper we denote HDm as the m-th row vector of HD in Eq. 2. 
Additionally, we adopt the fast DWHT algorithm to reduce the computational complexity of the PC layer in neural networks, resulting in an extremely fast and efficient neural network." }, { "heading": "3 METHOD", "text": "We propose a new PC layer that is computed with conventional transforms. The conventional PC layer can be formulated as follows:\nZ_{ij,m} = W_m^T · X_{ij}, 1 ≤ m ≤ M, (3)\nwhere (i, j) is a spatial index and m is the output channel index. In Eq. 3, N and M are the numbers of input and output channels, respectively, X_{ij} ∈ R^N is the vector of input X at spatial index (i, j), and W_m ∈ R^N is the vector of the m-th weight W in Eq. 3. For simplicity, the stride is set to 1 and the bias is omitted in Eq. 3.\nOur proposed method replaces the learnable parameters W_m with the bases of the conventional transforms. For example, replacing W_m with H_D^m in Eq. 3, we can formulate the new multiplication-free PC layer using DWHT. Similarly, the DCT basis kernels C_m in Eq. 1 can substitute for W_m in Eq. 3, formulating another new PC layer using DCT. Note that the normalization factors of the conventional transforms are not applied in the proposed PC layer, because Batch Normalization (Ioffe & Szegedy, 2015) performs a normalization and a linear transform that can be viewed as the normalization in the existing transforms.\nThe most important benefit of the proposed method comes from the fact that the fast algorithms of the existing transforms can be applied in the proposed PC layers for a further reduction of computation. Directly applying the new PC layer above gives a computational complexity of O(N^2). Adopting the fast algorithms, we can significantly reduce the computational complexity of the PC layer from O(N^2) to O(N log N) without any change in the computed results. We present the pseudo-code of our proposed fast PC layer using DWHT in Algorithm 1, based on the fast DWHT structure shown in Figure 1a. In Algorithm 1, for log N iterations, the even-indexed channels and odd-indexed channels are added and subtracted in an element-wise manner, respectively. The resulting added and subtracted elements are placed in the first N/2 elements and the last N/2 elements of the input of the next iteration, respectively. In this computation process, each iteration requires only N addition or subtraction operations. Consequently, Algorithm 1 has a complexity of O(N log N) in additions or subtractions. Compared to the existing PC layer, which requires a complexity of O(N^2) in multiplications, our method is extremely cheap in terms of computation costs, as shown in Figure 1b, and in the power consumption of computing devices (Horowitz, 2014). Note that, similarly to fast DWHT, DCT can also be computed in a fast manner that recursively decomposes the N-point input sequence into two subproblems of N/2-point DCT (Kok, 1997).\nCompared to DWHT, DCT has the advantage of using the more natural shapes of cosine basis kernels, which tend to provide better feature extraction performance by capturing frequency information. However, DCT inevitably needs multiplications for the inner product between the C and X vectors, and a look-up table (LUT) for computing the cosine kernel bases, which can increase processing time and memory access. On the other hand, as mentioned, the kernels of DWHT consist only of +1 and −1, which allows building a multiplication-free module. Furthermore, no memory access to kernel bases is needed if our structured-wiring-based fast DWHT algorithm (Algorithm 1) is applied.
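A minimal NumPy sketch of this fast, multiplication-free PC layer (the butterfly of Algorithm 1 below) is as follows; the code is illustrative only and assumes channel counts are powers of two:\nimport numpy as np\n\ndef fast_dwht_pointwise(X, M):\n    # Butterfly DWHT over the channel axis of a (B, N, H, W) tensor, then\n    # zero-pad or truncate to M output channels; only adds/subtracts are used.\n    B, N, H, W = X.shape\n    if N < M:\n        X = np.concatenate([X, np.zeros((B, M - N, H, W), X.dtype)], axis=1)\n    n = X.shape[1]\n    for _ in range(int(np.log2(n))):\n        e, o = X[:, ::2], X[:, 1::2]               # even-/odd-indexed channels\n        X = np.concatenate([e + o, e - o], axis=1)\n    return X[:, :M]\n\n# Sanity check: the butterfly matches the dense product H_3 x (natural order).\nX = np.random.randn(2, 8, 4, 4)\nH = np.ones((1, 1))\nfor _ in range(3):\n    H = np.block([[H, H], [H, -H]])\nassert np.allclose(fast_dwht_pointwise(X, 8)[0, :, 0, 0], H @ X[0, :, 0, 0])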
Our comprehensive experiments in Sections 3.1 and 3.2 show that DWHT is more efficient than DCT when applied in the PC layer, in terms of the trade-off between computational cost and accuracy.\nNote that, to secure a more general formulation of our newly defined PC layer, we pad zeros along the channel axis if the number of input channels is less than that of output channels, and truncate the output channels when the number of output channels is smaller than that of input channels, as shown in Algorithm 1.\nFigure 1a shows the architecture of the fast DWHT algorithm described in Algorithm 1. This structured-wiring-based architecture ensures that the receptive field of each output channel is N, meaning that each output channel fully reflects all input channels through the log2 N iterations. This fully-reflected property helps capture the input channel correlations, even though the computation process of which channel elements are added and subtracted is structured in a deterministic manner.\nTo successfully fuse our new PC layer into neural networks, we explored two themes: i) an optimal block search for the proposed PC layer; and ii) an optimal insertion strategy of the proposed block found by i), applied in a hierarchical manner on the blocks of networks. We assumed that there are an optimal block unit structure and an optimal hierarchy level (high-, middle-, low-level) of blocks in neural networks favored by these non-learnable transforms, and we conducted the experiments for the two aforementioned themes accordingly. We evaluated the effectiveness of each of our networks by the accuracy fluctuation as the number of learnable weight parameters or FLOPs changes. For comparison, we counted the total FLOPs as the sum of the numbers of multiplications, additions, and subtractions performed during inference. Unless otherwise mentioned, we followed the default experimental setting of batch size 128, 200 training epochs, an initial learning rate of 0.1 multiplied by 0.94 every 2 epochs, and momentum of 0.9 with a weight decay value of 5e-4. In all experiments, the model accuracy was obtained by averaging the Top-1 accuracy values of three independent training runs.\nAlgorithm 1 A new pointwise convolution using the fast DWHT algorithm\nInput: 4D input features X (B × N × H × W), output channel M\n1: n ← log2 N\n2: if N < M then\n3: ZeroPad1D(X, axis=1) ▷ pad zeros along the channel axis\n4: end if\n5: for i ← 1 to n do\n6: e ← X[:, ::2, :, :]\n7: o ← X[:, 1::2, :, :]\n8: X[:, :N/2, :, :] ← e + o\n9: X[:, N/2:, :, :] ← e − o\n10: end for\n11: if N > M then\n12: X ← X[:, :M, :, :]\n13: end if" }, { "heading": "3.1 OPTIMAL BLOCK STRUCTURE FOR THE CONVENTIONAL TRANSFORMS", "text": "From a microscopic perspective, the block unit is the basic foundation of neural networks, and it determines the efficiency of the weight parameter space and computation costs with respect to accuracy. Accordingly, to find the optimal block structure for our proposed PC layer, we conducted experiments that replace the existing PC layer blocks with our new PC layer blocks in ShuffleNet-V2 (Ma et al., 2018). The proposed block and its variants are listed in Figure 2. Comparing the results of (c) and (d) in Table 1 reveals the important fact that the ReLU (Nair & Hinton, 2010) activation function significantly harms the accuracy of our neural networks equipped with the conventional transforms. We empirically analyze this phenomenon in Section 4.1.
Additionally, comparing the accuracy results of (b) and (d) in Table 1 shows that the proposed PC layers are superior to a PC layer whose weights are randomly initialized and fixed to be non-learnable. These results imply that the DWHT and DCT kernels extract meaningful cross-channel correlation information better than randomly initialized, non-learnable kernels. Compared to the baseline model in Table 1, the (d)-DCT w/o ReLU and (d)-DWHT w/o ReLU blocks show an accuracy drop of approximately 2.3% under the condition that 42% of the learnable weight parameters and 49.5% of the FLOPs are reduced, respectively. These results imply that the proposed blocks (c) and (d) are still inefficient in the trade-off between accuracy and computation costs, leading us to explore further to find an optimal neural network architecture. In the next subsection, we address this problem by applying conventional transforms on the features of the optimal hierarchy level (see Section 3.2). Based on our comprehensive experiments, we set block structure (d) as our default proposed block, which is exploited in all the following experiments." }, { "heading": "3.2 OPTIMAL HIERARCHY LEVEL BLOCKS FOR CONVENTIONAL TRANSFORMS", "text": "In this section, we search for the optimal hierarchy level at which our optimal block, based on the proposed PC layer, is effectively applied in the whole network architecture. The optimal hierarchy level allows the proposed network to have the minimal number of learnable weight parameters and FLOPs without an accuracy drop, which is made possible by the non-parametric and extremely fast conventional transforms. Note that applying our proposed block on the high-level blocks of the network reduces the number of parameters and FLOPs much more than applying it on low-level blocks, because channel depth increases exponentially as the layer goes deeper in the network.\nIn Figure 3, we applied our optimal block (i.e., block (d) in Figure 2) on the high-, middle-, and low-level blocks, respectively. In our experiments, we evaluate the performance of the networks depending on the number of blocks at which the proposed optimal block is applied. The models we tested are denoted as (transform type)-(number of proposed blocks)-(hierarchy level: Low (L), Middle (M), or High (H) where the proposed optimal block is applied). For example, DWHT-3-L indicates the neural network model in which the first three blocks of ShuffleNet-V2 consist of the proposed blocks, while the other blocks are the original ShuffleNet-V2 blocks. Note that in this experiment we fix all blocks with stride = 2 in the baseline model to be the original ShuffleNet-V2 (Ma et al., 2018) stride = 2 blocks.\nFigure 3 shows the performance of the proposed methods depending on the transform type {DCT, DWHT}, hierarchy level {L, M, H}, and the number of proposed blocks that replace the original ones in the baseline {3, 6, 10}, in terms of Top-1 accuracy and the number of learnable weight parameters (or FLOPs). Note that, since the baseline model has only 7 blocks in the middle-level stage (i.e., Stage3), we performed the middle-level experiments only for the DCT/DWHT-3-M and -7-M models, in which the proposed blocks are applied from the end of Stage3 in the baseline model.
In Figure 3, the performance of our 10-H (or 10-L), 6-H (or 6-L), and 3-H (or 3-L) models (7-M and 3-M only for the middle-level experiments) is listed in ascending order of the number of learnable weight parameters and FLOPs.\nAs shown in the first column of Figure 3, applying our optimal block on the high-level blocks achieved a much better trade-off between the number of learnable weight parameters (FLOPs) and accuracy. Meanwhile, applying it on middle- and low-level features suffered slightly and severely, respectively, from inefficiency in the number of weight parameters (FLOPs) with regard to accuracy. This tendency appears similarly in both the DWHT-based and DCT-based models, which implies that there can be an optimal hierarchical level of blocks favored by conventional transforms. Also note that our DWHT-based models showed slightly higher or equal accuracy with fewer FLOPs than our DCT-based models in all the hierarchy level cases. This is because the fast version of DWHT requires no multiplications and fewer addition/subtraction operations than the fast version of DCT, while it also has sufficient ability to extract cross-channel information through its exquisite wiring-based structure.\nTo verify the generality of the proposed method, we also applied our methods to MobileNet-V1 (Howard et al., 2017). Inspired by the above results showing that the optimal hierarchy blocks for conventional transforms can be found in the high-level blocks, we replaced the high-level blocks of the baseline model (MobileNet-V1) and varied the number of replaced blocks to verify the effectiveness of the proposed method. The experimental results are described in Table 2. Remarkably, as shown in Table 2, our DWHT-6-H model yielded a 1.49% increase in Top-1 accuracy even under the condition that 79.4% of the parameters and 49.4% of the FLOPs are reduced compared with the baseline 1x model. This outstanding performance improvement comes from the depthwise separable convolutions used in MobileNet-V1, where PC layers play the dominant role in computation costs and memory space, i.e., they consume 94.86% of the FLOPs and 74% of the total number of parameters in the whole network (Howard et al., 2017). The full performance results for all the hierarchy levels {L, M, H} and numbers of blocks {3, 6, 10} (exceptionally, {3, 7} blocks for the middle-level experiments) are described in Appendix A.\nBased on the comprehensive experiments in Appendix A, it can be concluded that i) the proposed PC block always shows better efficiency in the number of parameters and FLOPs when applied on high levels of the network hierarchy rather than on low levels; and ii) the performance gain starts to decrease when the number of transform-based PC blocks exceeds a certain capacity of the network." }, { "heading": "4 EXPERIMENTS AND ANALYSIS", "text": "In this section, we analyze the significant accuracy degradation caused by applying ReLU after our proposed PC layer. Additionally, we analyze the active utilization of the 3 × 3 depthwise convolution weight kernel values, which take an auxiliary role for the non-learnable conventional transforms." }, { "heading": "4.1 HINDRANCE OF RELU IN CROSS-CHANNEL REPRESENTABILITY", "text": "As shown in Table 1, applying ReLU after conventional transforms significantly harmed the accuracy. This is due to the properties of the conventional transform basis kernels: both H_D^m in Eq. 2 and C_m in Eq. 1 have the same number of positive and negative parameters in the kernels, except for m = 0, and the distributions of the absolute values of the positive and negative elements in the kernels are almost identical. These properties tell us that output channel elements with values below zero should also be considered during the forward pass: when forwarding X_{ij} in Eq. 3 through the conventional transforms, if some important channel elements in X_{ij} with larger values than others are combined with negative values of C_m or H_D^m, the important cross-channel information in the output Z_{ij,m} in Eq. 3 can reside in the value range below zero. Figure 4 shows that the activations at all hierarchy levels from both the DCT- and DWHT-based PC layers have not only positive values but also negative values in almost the same proportion. These negative values possibly include important cross-channel correlation information. Thus, applying ReLU on the activations of PC layers based on conventional transforms discards crucial cross-channel information contained in the negative values that must be forwarded through, leading to the significant accuracy drop shown in the results of Table 1. Figure 6 empirically demonstrates the above analysis by showing that when the negative value regions are fully ignored (i.e., F = ReLU), the accuracy is significantly degraded, while fully reflecting the negative value regions (i.e., g = 1) shows the best accuracy. Based on the above kernel-value analysis and its experiment, we do not use a non-linear activation function after the proposed PC layer." }, { "heading": "4.2 ACTIVE 3 × 3 DEPTHWISE CONVOLUTION WEIGHTS", "text": "In Figure 5 and Appendix B, it is observed that the 3 × 3 depthwise convolution weights of the last 3 blocks in DWHT-3-H and DCT-3-H have far fewer near-zero values than those of the baseline model. That is, the number of values that are far from zero is much larger in the DCT-3-H and DWHT-3-H models than in the baseline model. We conjecture that these learnable weights with values far from zero were actively fitted to the optimal domain favored by the conventional transforms. Consequently, these weights are actively and sufficiently utilized to take the auxiliary role for the non-learnable conventional transforms, deriving the accuracy increase over the conventional PC layer shown in Figure 3.\nTo verify the impact of the activeness of these 3 × 3 depthwise convolution weights in the last 3 blocks, we experimented with regularizing these weights while varying the weight decay values. Higher weight decay values strongly regularize the scale of the 3 × 3 depthwise convolution weight values in the last 3 blocks. Thus, a strong constraint on the scale of these weight values hinders their active utilization, which results in the accuracy drop shown in Figure 7." }, { "heading": "5 CONCLUSION", "text": "We propose new PC layers formulated through conventional transforms. Our new PC layers allow neural networks to be efficient in computational complexity and in learnable weight parameters. Especially for the DWHT-based PC layer, its floating-point-multiplication-free property enables extreme efficiency in computation overhead. With the purpose of successfully fusing our PC layers into neural networks, we empirically found the optimal block unit structure and the hierarchy level of blocks in neural networks favored by conventional transforms, showing an accuracy increase and great representability of cross-channel correlations.
We further intrinsically revealed the hindrance of ReLU to capturing cross-channel representability, and the activeness of the depthwise convolution weights in the last blocks of our proposed neural network." }, { "heading": "A GENERALITY OF APPLYING PROPOSED PC LAYERS IN OTHER NEURAL NETWORKS", "text": "In Figure 8, for the purpose of finding a more definite hierarchy level of blocks favored by our proposed PC layers, we subdivided our middle-level experiment scheme: the DCT/DWHT-3-M-Front model denotes the model in which the proposed blocks are applied from the beginning of Stage3 in the baseline, while the DCT/DWHT-3-M-Rear model denotes the model in which they are applied from the end of Stage3. The performance curves of all our proposed models in Figure 8 show that if we apply the proposed optimal block within the first 6 blocks of the network, the Top-1 accuracy is mildly or significantly deteriorated relative to the required computational cost and number of learnable parameters, informing us of the important fact that there are definite hierarchy levels of blocks that are favored or not favored by our proposed PC layers in the network." }, { "heading": "B HISTOGRAM OF 3 × 3 DEPTHWISE CONVOLUTION WEIGHTS IN HIGH-LEVEL BLOCKS", "text": "" }, { "heading": "C PERFORMANCE COMPARISON BETWEEN RCPC AND PROPOSED PC LAYERS", "text": "To demonstrate the superiority of our proposed DCT/DWHT-based PC layers over the RCPC layer at all hierarchical (i.e., low/mid/high) levels, we compared the performance trade-offs in Figure 10. Note that the DCT/DWHT-based PC layers almost always achieve higher accuracy than the RCPC layer at all hierarchical levels. Comparing the distance of the orange or green line from the red line in Figure 10, our DCT/DWHT-based PC layers show high efficiency in the trade-off between accuracy and the computational cost or number of learnable parameters, compared to the RCPC layer, at almost all hierarchical levels." }, { "heading": "D GENERALITY OF APPLYING OUR PROPOSED PC LAYERS ON OTHER TASKS", "text": "In order to demonstrate the domain-generality of the proposed method, we conducted comprehensive experiments applying our proposed PC layers to object detection, specifically to the face detection task. For face detection schemes such as anchor design, data augmentation, and feature-map resolution design, we followed Zhang et al. (2017a), which is one of the baseline methods in the face detection field. Note that there is a huge demand for real-time face detection algorithms with high detection accuracy, which leads us to apply our PC layers to a lightweight face detection network. Therefore, instead of using VGG16 (Simonyan & Zisserman, 2014) as the backbone network as in Zhang et al. (2017a), we set MobileNet-V1 0.25x as our baseline backbone model, with extra depthwise separable blocks added for detecting more diverse scales of faces in the images. In this baseline model, we replaced the conventional PC layers within the last 3 or 6 blocks with our DCT/DWHT-based PC layers. We trained all models on the WIDER FACE (Yang et al., 2016) train dataset and evaluated them on the WIDER FACE validation dataset and the Face Detection Data Set and Benchmark (FDDB) dataset (Jain & Learned-Miller, 2010). The WIDER FACE validation set has Easy, Medium, and Hard subsets, which correspond to large, medium, and small scale faces, respectively.
Validation results of the baseline model and our proposed DCT/DWHT models on WIDER FACE are described in Table 3.\nIn Table 3, we note that, overall, our DWHT-3-H and DWHT-6-H models showed comparable or even higher mAP values than the baseline model on all subsets (Easy, Medium, and Hard), with significantly reduced numbers of learnable parameters and FLOPs. In particular, the DWHT-3-H model achieved 0.27% higher mAP than the baseline model on the Hard subset under the condition that 79% of the parameters and 16% of the FLOPs are reduced. Regarding the DCT-3-H and DCT-6-H models, they showed a solid improvement in mAP on the Easy and Medium subsets with significantly reduced numbers of parameters and FLOPs compared to the baseline model.\nAdditionally, we verified the effectiveness of the proposed method on the FDDB dataset in Table 4. We note that our DWHT-6-H and DWHT-3-H models showed comparable or even 0.09% higher AP than the baseline model, with significantly reduced numbers of learnable parameters and FLOPs. On the other hand, our DCT-6-H and DCT-3-H models showed a small degradation in AP compared to the baseline model, which is mild considering the reduced numbers of parameters and FLOPs.\nConsequently, our comprehensive experiments on both the WIDER FACE and FDDB datasets reveal the generality of our proposed method, enabling neural networks to be extremely lightweight while reducing computational overhead." } ]
2019
null
SP:63ad3be1dae7ede5c02a847304072c1cbc91b1cb
[ "This paper proposes to model various uncertainty measures in Graph Convolutional Networks (GCN) by Bayesian MC Dropout. Compared to existing Bayesian GCN methods, this work stands out in two aspects: 1) in terms of prediction, it considers multiple uncertainty measures including aleatoric, epistemic, vacuity and dissonance (see paper for definitions); 2) in terms of generative modeling, the GCN first predicts the parameters of a Dirichlet distribution, and then the class probabilities are sampled from the Dirichlet. Training/inference roughly follows MC Dropout, with two additional priors/teachers: 1) the prediction task is guided by a deterministic teacher network (via KL(model || teacher)), and 2) the Dirichlet parameters are guided by a kernel-based prior (via KL(model || prior)). Experiments on six datasets showed superior performance in terms of the end prediction task, as well as better uncertainty modeling in terms of out-of-distribution detection.", "The authors proposed a Bayesian graph neural network framework for node classification. The proposed models outperformed the baselines in six node classification tasks. The main contribution is to evaluate various uncertainty measures for the uncertainty analysis of Bayesian graph neural networks. The authors show that vacuity and aleatoric measure are important to detect out-of-distribution and the dissonance uncertainty plays a key role for improving performance." ]
Thanks to graph neural networks (GNNs), semi-supervised node classification has shown state-of-the-art performance on graph data. However, GNNs do not consider the different types of uncertainty associated with the class probabilities, so as to minimize the risk of misclassification under uncertainty in real life. In this work, we propose a Subjective Bayesian deep learning framework that reflects various types of uncertainty in classification predictions by leveraging the powerful modeling and learning capabilities of GNNs. We consider multiple uncertainty types from both the deep learning (DL) and belief/evidence theory domains. We treat the predictions of a Subjective Bayesian GNN (S-BGNN) as nodes' multinomial subjective opinions in a graph, based on Dirichlet distributions in which each belief mass is the belief probability of a class. By collecting evidence from the given labels of the training nodes, the S-BGNN model is designed to accurately predict the probabilities of each class and to detect out-of-distribution samples. We validated the outperformance of the proposed S-BGNN over state-of-the-art counterparts in terms of the accuracy of node classification prediction and out-of-distribution detection on six real network datasets.
[]
[ { "authors": [ "Clarence W De Silva" ], "title": "Intelligent control: fuzzy logic applications", "venue": "CRC press,", "year": 2018 }, { "authors": [ "Dhivya Eswaran", "Stephan Günnemann", "Christos Faloutsos" ], "title": "The power of certainty: A dirichletmultinomial model for belief propagation", "venue": "In Proceedings of the 2017 SIAM International Conference on Data Mining,", "year": 2017 }, { "authors": [ "Yarin Gal" ], "title": "Uncertainty in deep learning", "venue": "University of Cambridge,", "year": 2016 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Bayesian convolutional neural networks with bernoulli approximate variational inference", "venue": "arXiv preprint arXiv:1506.02158,", "year": 2015 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In AISTATS, pp", "year": 2010 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "arXiv preprint arXiv:1610.02136,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Audun Josang", "Jin-Hee Cho", "Feng Chen" ], "title": "Uncertainty characteristics of subjective opinions", "venue": "In FUSION,", "year": 1998 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Alex Kendall", "Vijay Badrinarayanan", "Roberto Cipolla" ], "title": "Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding", "venue": "arXiv preprint arXiv:1511.02680,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Qing Lu", "Lise Getoor" ], "title": "Link-based classification", "venue": "In ICML, pp", "year": 2003 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "JMLR, 9(Nov):2579–2605,", "year": 2008 }, { "authors": [ "Andrey Malinin", "Mark Gales" ], "title": "Predictive uncertainty estimation via prior networks", "venue": "arXiv preprint arXiv:1802.10501,", "year": 2018 }, { "authors": [ "Julian McAuley", "Christopher Targett", "Qinfeng Shi", "Anton Van Den Hengel" ], "title": "Image-based recommendations on styles and substitutes", "venue": "In SIGIR,", "year": 2015 }, { "authors": [ "Soumyasundar Pal", "Florence Regol", "Mark Coates" ], "title": "Bayesian graph convolutional neural networks using non-parametric graph learning", "venue": "arXiv preprint arXiv:1910.12132,", "year": 2019 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In KDD,", "year": 2014 }, { "authors": [ "Seongok Ryu", "Yongchan Kwon", "Woo Youn 
Kim" ], "title": "Uncertainty quantification of molecular property prediction with bayesian neural networks", "venue": null, "year": 1903 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Murat Sensoy", "Lance Kaplan", "Melih Kandemir" ], "title": "Evidential deep learning to quantify classification uncertainty", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Kari Sentz", "Scott Ferson" ], "title": "Combination of evidence in Dempster-Shafer theory, volume 4015", "venue": null, "year": 2002 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "arXiv preprint arXiv:1811.05868,", "year": 2018 }, { "authors": [ "Zhilin Yang", "William W Cohen", "Ruslan Salakhutdinov" ], "title": "Revisiting semi-supervised learning with graph embeddings", "venue": "arXiv preprint arXiv:1603.08861,", "year": 2016 }, { "authors": [ "Yingxue Zhang", "Soumyasundar Pal", "Mark Coates", "Deniz Üstebay" ], "title": "Bayesian graph convolutional neural networks for semi-supervised classification", "venue": "arXiv preprint arXiv:1811.11103,", "year": 2018 }, { "authors": [ "Yingxue Zhang", "Soumyasundar Pal", "Mark Coates", "Deniz Ustebay" ], "title": "Bayesian graph convolutional neural networks for semi-supervised classification", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Inherent uncertainties introduced by different root causes have emerged as serious hurdles to find effective solutions for real world problems. Critical safety concerns have been brought due to lack of considering diverse causes of uncertainties, resulting in high risk due to misinterpretation of uncertainties (e.g., misdetection or misclassification of an object by an autonomous vehicle). Graph neural networks (GNNs) (Kipf & Welling, 2016; Veličković et al., 2018) have gained tremendous attention in the data science community. Despite their superior performance in semi-supervised node classification and/or regression, they didn’t allow to deal with various types of uncertainties. Predictive uncertainty estimation (Malinin & Gales, 2018) using Bayesian NNs (BNNs) has been explored for classification prediction or regression in the computer vision applications, with wellknown uncertainties, aleatoric and epistemic uncertainties. Aleatoric uncertainty only considers data uncertainty derived from statistical randomness (e.g., inherent noises in observations) while epistemic uncertainty indicates model uncertainty due to limited knowledge or ignorance in collected data. On the other hand, in the belief or evidence theory, Subjective Logic (SL) (Josang et al., 2018) considered vacuity (or lack of evidence) as uncertainty in an subjective opinion. Recently other uncertainties such as dissonance, consonance, vagueness, and monosonance (Josang et al., 2018) are also introduced. This work is the first that considers multidimensional uncertainty types in both DL and belief theory domains to predict node classification and out-of-distribution (OOD) detection. To this end, we incorporate the multidimensional uncertainty, including vacuity, dissonance, aleatoric uncertainty, and epistemic uncertainty in selecting test nodes for Bayesian DL in GNNs. We perform semi-supervised node classification and OOD detection based on GNNs. By leveraging the modeling and learning capability of GNNs and considering multidimensional uncertainties in SL, we propose a Bayesian DL framework that allows simultaneous estimation of different uncertainty types associated with the predicted class probabilities of the test nodes generated by GNNs. We treat the predictions of a Subjective Bayesian GNN (S-BGNN) as nodes’ subjective opinions in a graph modeled as Dirichlet distributions on the class probabilities, and learn the S-BGNN model by collecting the evidence from the given labels of the training nodes (see Figure 1). This work has the following key contributions: • A Subjective Bayesian framework to predictive uncertainty estimation for GNNs. Our pro-\nposed framework directly predicts subjective multinomial opinions of the test nodes in a graph,\nwith the opinions following Dirichlet distributions with each belief probability as a class probability. Our proposed framework is a generative model, so it cal be highly applicable across all GNNs and allows simultaneously estimating different types of associated uncertainties with the class probabilities. • Efficient approximate inference algorithms: We propose a Graph-based Kernel Dirichlet distribution Estimation (GKDE) method to reduce error in predicting Dirichlet distribution. We designed an iterative knowledge distillation algorithm that treats a deterministic GNN as a teacher network while considering our proposed Subjective Bayesian GNN model (a realization of our proposed framework for a specific GNN) as a distilled network. 
This allows the expected class probabilities based on the predicted Dirichlet distributions (i.e., the outputs of our trained Bayesian model) to match the predicted class probabilities of the deterministic GNN model, along with the uncertainty estimated in the predictions. • Comprehensive experiments validating the performance of our proposed framework. Based on six real graph datasets, we compared the performance of our proposed framework with that of other competitive DL algorithms. For a fair comparison, we tweaked the DL algorithms to consider various uncertainty types in the predicted decisions." }, { "heading": "2 RELATED WORK", "text": "Epistemic Uncertainty in Bayesian Deep Learning (BDL): Machine/deep learning (M/DL) research has mainly considered aleatoric uncertainty (AU) and epistemic uncertainty (EU) using BNNs for computer vision applications. AU consists of homoscedastic uncertainty (i.e., constant errors for different inputs) and heteroscedastic uncertainty (i.e., different errors for different inputs) (Gal, 2016). A BDL framework was presented to estimate both AU and EU simultaneously in regression settings (e.g., depth regression) and classification settings (e.g., semantic segmentation) (Kendall & Gal, 2017). Later, a new type of uncertainty, called distributional uncertainty (DU), was defined based on the distributional mismatch between the test and training data distributions (Malinin & Gales, 2018). Dropout variational inference (Gal & Ghahramani, 2016) is used as one of the key approximate inference techniques in BNNs. Other methods (Eswaran et al., 2017; Zhang et al., 2018) measure overall uncertainty in node classification but do not consider uncertainty decomposition or GNNs. Uncertainty Quantification in Belief/Evidence Theory: In the belief/evidence theory domain, uncertainty reasoning has been substantially explored, in frameworks such as Fuzzy Logic (De Silva, 2018), Dempster-Shafer Theory (DST) (Sentz et al., 2002), and Subjective Logic (SL) (Jøsang, 2016). Belief theory focuses on reasoning about the inherent uncertainty in information resulting from unreliable, incomplete, deceptive, and/or conflicting evidence. SL considered uncertainty in subjective opinions in terms of vacuity (i.e., lack of evidence) and vagueness (i.e., failure to discriminate a belief state) (Jøsang, 2016). Recently, other uncertainty types have been studied, such as dissonance (due to conflicting evidence) and consonance (due to evidence supporting composite states) (Josang et al., 2018). In deep NNs, SL has been considered to train a deterministic NN for supervised classification in computer vision applications (Sensoy et al., 2018). However, these works did not consider a generic way of estimating multidimensional uncertainty using Bayesian DL for the GNNs used in graph-data applications." }, { "heading": "3 PROPOSED APPROACH", "text": "We now define the problem of uncertainty-aware semi-supervised node classification and then present a Bayesian GNN framework to address it." }, { "heading": "3.1 PROBLEM DEFINITION", "text": "Given an input graph G = (V, E, r, y_L), where V = {1, · · · , N} is a ground set of nodes, E ⊆ V × V is a ground set of edges, r = [r_1, · · · , r_N]^T ∈ R^{N×d} is a node-level feature matrix, r_i ∈ R^d is the feature vector of node i, y_L = {y_i | i ∈ L} are the labels of the training nodes L ⊂ V, and y_i ∈ {1, . . . , K} is the class label of node i.
We aim to predict: (1) the class probabilities of the testing nodes, p_{V\\L} = {p_i ∈ [0, 1]^K | i ∈ V \\ L}; and (2) the associated multidimensional uncertainty estimates introduced by different root causes, u_{V\\L} = {u_i ∈ [0, 1]^m | i ∈ V \\ L}, where p_{i,k} is the probability that the class label y_i = k, and m is the total number of uncertainty types." }, { "heading": "3.2 MULTIDIMENSIONAL UNCERTAINTY QUANTIFICATION", "text": "Multiple uncertainty types may be estimated, such as aleatoric uncertainty, epistemic uncertainty, vacuity, and dissonance, among others. The estimation of the first two types of uncertainty relies on the design of an appropriate Bayesian DL model with parameters θ. Following (Gal, 2016), node i's aleatoric uncertainty is Aleatoric[p_i] = E_{Prob(θ|G)}[H(y_i|r; θ)], where H(·) is the Shannon entropy of Prob(p_i|r; θ). The epistemic uncertainty of node i is estimated by:\nEpistemic[p_i] = H[E_{Prob(θ|G)}[Prob(y_i|r; θ)]] − E_{Prob(θ|G)}[H(y_i|r; θ)], (1)\nwhere the first term indicates the entropy (or total uncertainty). Vacuity and dissonance can be estimated based on the subjective opinion of each testing node i (Josang et al., 2018). Denote i's subjective opinion as [b_{i1}, · · · , b_{iK}, v_i], where b_{ik} (≥ 0) is the belief mass of the k-th category, v_i (≥ 0) is the uncertainty mass (i.e., vacuity), and K is the total number of categories, with ∑_{k=1}^K b_{ik} + v_i = 1. Node i's dissonance is obtained by:\nω(b_i) = ∑_{k=1}^K ( b_{ik} ∑_{j=1, j≠k}^K b_{ij} Bal(b_{ij}, b_{ik}) / ∑_{j=1, j≠k}^K b_{ij} ), (2)\nwhere the relative mass balance between a pair of belief masses b_{ij} and b_{ik} is expressed by Bal(b_{ij}, b_{ik}) = 1 − |b_{ij} − b_{ik}| / (b_{ij} + b_{ik}). To develop a Bayesian GNN framework that predicts multiple types of uncertainty, we estimate vacuity and dissonance using a Bayesian model. In SL, a multinomial opinion follows a Dirichlet distribution, Dir(p_i|α_i), where α_i ∈ [1, ∞]^K represents the distribution parameters. Given S_i = ∑_{k=1}^K α_{ik}, the belief mass b_i and uncertainty mass v_i can be obtained by b_{ik} = (α_{ik} − 1)/S_i and v_i = K/S_i." }, { "heading": "3.3 PROPOSED BAYESIAN DEEP LEARNING FRAMEWORK", "text": "Let p = [p_1, . . . , p_N]^T ∈ R^{N×K} denote the class probabilities of the nodes in V, where p_i = [p_{i1}, . . . , p_{iK}]^T refers to the class probabilities of a specific node i. As shown in Figure 1, our proposed Bayesian GNN framework can be described by the following generative process: • Sample θ from a predefined prior distribution, i.e., N(0, I). • For each node i ∈ V: (1) sample the class probabilities p_i from a Dirichlet distribution, Dir(p_i|α_i), where α_i = f_i(r; θ) is parameterized by a GNN network α = f(r; θ) : R^{N×d} → [1, ∞]^{N×K} that takes the attribute matrix r as input and directly outputs all the node-level Dirichlet parameters α = [α_1, · · · , α_N], with θ the hyper-parameters of the GNN network; and (2) sample y_i ∼ Cat(y_i|p_i), a categorical distribution on p_i.\nIn this design, the graph dependencies among the class labels in y_L and y_{V\\L} are modeled via the GNN network f(r; θ). Our proposed framework differs from the traditional Bayesian GNN network (Zhang et al., 2018) in that the outputs of the former are the parameters of node-level Dirichlet distributions (α), while the outputs of the latter are directly the node-level class probabilities (p).
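As a concrete illustration of the uncertainty measures of Section 3.2 under this framework, the following minimal NumPy sketch computes vacuity, dissonance (Eq. (2)), and the entropy-based aleatoric/epistemic estimates (Eq. (1)) from Monte Carlo samples of the Dirichlet parameters; the names are illustrative, and the categorical entropies are taken at the per-sample Dirichlet means:\nimport numpy as np\n\ndef uncertainty_estimates(alphas, eps=1e-12):\n    # alphas: (M, K) Dirichlet parameters for one node from M dropout passes.\n    M, K = alphas.shape\n    S = alphas.sum(axis=1, keepdims=True)\n    p = alphas / S                          # expected class probabilities per pass\n    b = (alphas - 1.0) / S                  # belief masses, Section 3.2\n    vacuity = float(np.mean(K / S))\n    diss = 0.0                               # Eq. (2), averaged over the M passes\n    for m in range(M):\n        for k in range(K):\n            rest = np.delete(b[m], k)\n            bal = 1.0 - np.abs(rest - b[m, k]) / (rest + b[m, k] + eps)\n            diss += b[m, k] * (rest * bal).sum() / (rest.sum() + eps)\n    diss /= M\n    ent = lambda q: float(-(q * np.log(q + eps)).sum(axis=-1).mean())\n    aleatoric = ent(p)                                            # E_theta[H(y|r; theta)]\n    epistemic = ent(p.mean(axis=0, keepdims=True)) - aleatoric    # Eq. (1)\n    return vacuity, diss, aleatoric, epistemic\n\nalphas = 1.0 + np.random.gamma(2.0, 1.0, size=(10, 3))  # toy samples, alpha_k >= 1\nprint(uncertainty_estimates(alphas))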
The conditional probability of p, Prob(p|r; θ), can be obtained by:\nProb(p|r; θ) = ∏_{i=1}^N Dir(p_i|α_i), α_i = f_i(r; θ), (3)\nwhere the Dirichlet probability function Dir(p_i|α_i) is defined by:\nDir(p_i|α_i) = (Γ(S_i) / ∏_{k=1}^K Γ(α_{ik})) ∏_{k=1}^K p_{ik}^{α_{ik} − 1}, S_i = ∑_{k=1}^K α_{ik}. (4)\nBased on the proposed Bayesian GNN framework, the joint probability of y conditioned on the input graph G and the node-level feature matrix r can be estimated by:\nProb(y|r; G) = ∫∫ Prob(y|p) Prob(p|r; θ) Prob(θ|G) dp dθ, (5)\nwhere Prob(θ|G) is the posterior probability of the parameters θ conditioned on the input graph G, which is estimated in Sections 3.4 and 3.6. The aleatoric and epistemic uncertainties can be estimated using the equations described in Section 3.2. The vacuity associated with the class probabilities p_i of node i can be estimated by Vacuity(p_i) = E_{Prob(θ|G)}[v_i] = E_{Prob(θ|G)}[K / ∑_{k=1}^K α_{ik}]. The dissonance of node i is estimated as Disso.[p_i] = E_{Prob(θ|G)}[ω(b_i)], where ω(b_i) is defined in Eq. (2)." }, { "heading": "3.4 BAYESIAN INFERENCE WITH DROPOUT", "text": "The marginalization in Eq. (5) is generally intractable. A dropout technique can be used to obtain an approximate solution and to draw samples from the posterior distribution over models (Gal & Ghahramani, 2016). For this reason, we adopt the dropout technique of (Gal & Ghahramani, 2015) for variational inference in Bayesian CNNs, where Bernoulli distributions are assumed over the network's weights. This dropout technique allows us to perform probabilistic inference over our Bayesian DL framework using GNNs. For Bayesian inference, we identify a posterior distribution over the network's weights, given the input graph G and the observed labels y_L, as Prob(θ | G), where θ = {W_1, . . . , W_L, b_1, . . . , b_L}, L is the total number of layers, W_i refers to the GNN's weight matrix of dimensions P_i × P_{i−1}, and b_i is a bias vector of dimension P_i for layer i = 1, · · · , L. Since the posterior distribution is intractable, we use variational inference to learn q(θ, γ), a distribution over matrices whose columns are randomly set to zero, approximating the intractable posterior by minimizing the Kullback-Leibler (KL) divergence between this approximating distribution and the full posterior:\nmin_γ KL(q(θ, γ) ‖ Prob(θ|G)), (6)\nwhere γ = {M_1, . . . , M_L, m_1, . . . , m_L} are the variational parameters, with M_i ∈ R^{P_i × P_{i−1}} and m_i ∈ R^{P_i}. We define W_i in q(θ, γ) by:\nW_i = M_i diag([z_{ij}]_{j=1}^{P_{i−1}}), z_{ij} ∼ Bernoulli(d_i) for i = 1, . . . , L, j = 1, . . . , P_{i−1}, (7)\nwhere d = {d_1, . . . , d_L} are the dropout probabilities and the z_{ij} are Bernoulli-distributed random variables. The binary variable z_{ij} = 0 corresponds to unit j in layer i − 1 being dropped out as an input to layer i. We can obtain the approximate model of the Gaussian process from (Gal & Ghahramani, 2015). The dropout probabilities d_i can be optimized or fixed (Kendall et al., 2015). For simplicity, we fix the d_i in our experiments, as optimizing them is beyond the scope of our study. In (Gal & Ghahramani, 2015), minimizing the cross entropy (or squared error) loss function is proven to minimize the KL divergence (see Eq. (6)). Therefore, training the GNN model with stochastic gradient descent enables learning an approximate distribution over the weights, which provides good explainability of the data and prevents overfitting.
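A minimal sketch of the Bernoulli weight sampling of Eq. (7) is shown below (illustrative names; following the notation above, d_i is used directly as the Bernoulli parameter, so z_j = 0 drops unit j):\nimport numpy as np\n\ndef sample_dropout_weight(M_i, d_i, rng):\n    # Eq. (7): W_i = M_i diag(z), z_j ~ Bernoulli(d_i); a zero z_j drops input\n    # unit j of layer i. Keeping the masks active at test time produces the\n    # stochastic forward passes used for Monte Carlo dropout in Eq. (8).\n    z = rng.binomial(1, d_i, size=M_i.shape[1])\n    return M_i * z                           # zeros out the corresponding columns\n\nrng = np.random.default_rng(0)\nM1 = rng.normal(size=(16, 8))                # variational parameter matrix M_1\nW1 = sample_dropout_weight(M1, 0.5, rng)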
For dropout inference, we train a GNN model with dropout before every weight layer and also apply dropout at test time to sample from the approximate posterior (i.e., stochastic forward passes, a.k.a. Monte Carlo dropout; see Eq. (8)). At the test stage, we approximate the joint probability in Eq. (5) by:

Prob(y|r;G) ≈ (1/M) Σ_{m=1}^M ∫ Prob(y|p) Prob(p|r;θ^(m)) dp,  θ^(m) ∼ q(θ),   (8)

from which the Dirichlet parameters α can be inferred as α ≈ (1/M) Σ_{m=1}^M f(r, θ^(m)), θ^(m) ∼ q(θ). As our model is a generative model that predicts the parameters of Dirichlet distributions, we use a loss function that computes its Bayes risk with respect to the sum-of-squares loss ‖y − p‖²₂:

L(γ) = Σ_{i∈L} ∫ ‖y_i − p_i‖²₂ · Prob(p_i|r;γ) dp_i = Σ_{i∈L} Σ_{j=1}^K ( (y_{ij} − E[p_{ij}])² + Var(p_{ij}) )   (9)

Eq. (9) jointly minimizes the prediction error and the variance, which maximizes the classification accuracy on each training node by removing excessive misleading evidence (Sensoy et al., 2018)." }, { "heading": "3.5 GRAPH-BASED KERNEL DIRICHLET DISTRIBUTION ESTIMATION", "text": "To better learn the Dirichlet distribution within our Bayesian GNN framework, we propose Graph-based Kernel Dirichlet distribution Estimation (GKDE). The key idea of GKDE is to estimate prior Dirichlet distribution parameters for each node based on the training nodes (see Figure 1 (b)). We then use this prior Dirichlet distribution during training to encourage two trends: (i) nodes far from training nodes show high vacuity (due to lack of evidence); and (ii) nodes near class boundaries show high dissonance (due to conflicting evidence). Based on SL, let each training node contribute one piece of evidence for its class label. Denote the evidence contribution from node i to a target node j by h(y_i, dis(i, j)) = [h_1, . . . , h_k, . . . , h_K] ∈ [0, 1]^K, where h_k(y_i, dis(i, j)) is obtained by:

h_k(y_i, dis(i, j)) = 0 if y_i ≠ k;  σ√(2π) · g(dis(i, j)) if y_i = k   (10)

where g(dis(i, j)) = (1/(σ√(2π))) e^{−dis(i,j)²/(2σ²)} is the Gaussian kernel function used to estimate the distribution effect between nodes i and j, dis(i, j) is the node distance (the length of the shortest path between nodes i and j), and σ is the bandwidth parameter. The GKDE prior evidence estimate is ê_j = Σ_{i∈L} h(y_i, dis(i, j)), and the prior Dirichlet distribution is α̂_j = ê_j + 1 (a code sketch of this construction is given below, after Section 3.6). During training, we minimize the KL-divergence between the model's predicted Dirichlet distribution and this prior: min KL[Dir(α)‖Dir(α̂)]." }, { "heading": "3.6 A TEACHER NETWORK FOR REFINED INFERENCE", "text": "A key contribution is that our proposed Bayesian GNN model can estimate multiple uncertainty types on top of the predictions made by existing GNNs. As a desired property, the expected class probabilities generated by our Bayesian GNN model should be consistent with the class probabilities predicted by the deterministic GNN model. In addition, our Bayesian GNN model is a generative model and may not always outperform GNN models (i.e., discriminative models) on node classification when uncertainty-based prediction brings no benefit. To refine the inference of our proposed model, we leverage the principles of knowledge distillation in DL (Hinton et al., 2015). In particular, we treat our proposed model as the distilled model and a deterministic GNN model as the teacher model, as shown in Figure 1 (c). The key idea is to train our proposed model to imitate the outputs of the teacher network on the class probabilities while minimizing the loss function of our proposed model.
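Returning to the GKDE prior of Section 3.5: in Eq. (10) the σ√(2π) factor cancels the kernel's normalizer, so each labeled node simply contributes exp(−dis(i,j)²/(2σ²)) of evidence to its own class. A minimal sketch under that observation, using networkx shortest paths (the helper name and data layout are our own; unreachable nodes receive no evidence and hence keep a maximally vacuous prior):

import numpy as np
import networkx as nx

def gkde_prior(G, train_labels, K, sigma=1.0):
    # train_labels: dict {node: class index} over the labeled set L.
    # Returns alpha_hat[j] = e_hat[j] + 1 for every node j (Section 3.5).
    e_hat = {j: np.zeros(K) for j in G.nodes()}
    for i, y in train_labels.items():
        dist = nx.single_source_shortest_path_length(G, i)  # dis(i, j)
        for j, d in dist.items():
            e_hat[j][y] += np.exp(-d**2 / (2.0 * sigma**2))
    return {j: e + 1.0 for j, e in e_hat.items()}

# Example: a 5-node path with labeled endpoints of different classes;
# the middle node collects equal conflicting evidence (high-dissonance prior).
G = nx.path_graph(5)
alpha_hat = gkde_prior(G, {0: 0, 4: 1}, K=2)
print(alpha_hat[2])   # [1.135..., 1.135...]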
We observed that the modeling of data uncertainty in our proposed model provides useful information that can further improve the accuracy of the deterministic GNN model. Therefore, we propagate this useful information back to the teacher model to help train it. Let Prob(y|r;β) denote the joint probability of class labels under a deterministic GNN model, where β refers to its model parameters. The probability function Prob(y|r;γ,G) is estimated based on Eq. (8) using the variational parameters γ. We measure the closeness between Prob(y|r;β) and Prob(y|r;γ,G) with a KL-divergence, which is minimized jointly with the two models' own loss functions on the labeled nodes. This leads to solving the following optimization problem:

min_{γ,β} L(γ) + L(β) + λ · ( KL[Prob(y|r;γ,G) ‖ Prob(y|r;β)] + KL[Dir(α)‖Dir(α̂)] )   (11)

where L(β) is the loss function (i.e., cross-entropy) of the deterministic GNN model and λ is a trade-off parameter. Our inference algorithm using backpropagation is detailed in the Appendix." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we describe our experimental settings and demonstrate the performance of our proposed model on semi-supervised node classification. For the performance comparison and analysis of our model against existing counterparts, we report and analyze results in terms of overall classification accuracy." }, { "heading": "4.1 DATASETS", "text": "We use six datasets, including three citation network datasets (Sen et al., 2008) (i.e., Cora, Citeseer, Pubmed) and three newer datasets (Shchur et al., 2018) (i.e., Coauthor Physics, Amazon Computer, and Amazon Photo). We summarize the description and experimental setup of the datasets in Table 1. For all datasets, we use undirected graphs with 20 training nodes per category. We chose the same dataset splits as in (Yang et al., 2016), with an additional validation set of 500 labeled nodes for hyperparameter tuning on the citation datasets, and followed the dataset splits of (Shchur et al., 2018) for the Coauthor Physics, Amazon Computer, and Amazon Photo datasets, for a fair comparison." }, { "heading": "4.2 COMPARING SCHEMES", "text": "We conduct an extensive comparative performance analysis of our proposed models against a number of state-of-the-art counterparts. Our proposed subjective GNN models are: (1) S-GNN (Subjective GNN), which outputs a subjective opinion (a Dirichlet distribution) instead of softmax probabilities, enabling vacuity and dissonance estimation; (2) S-BGNN (Subjective Bayesian GNN), which places the subjective GNN in a Bayesian framework with multi-type uncertainty estimation; (3) S-BGNN-T (S-BGNN with a Teacher network), which helps improve the expected class probability estimation; and (4) S-BGNN-T-K (S-BGNN-T with GKDE), which helps improve the Dirichlet distribution estimation. We use two popular GNN models as backbones: GCN (Kipf & Welling, 2016) and GAT (Veličković et al., 2018).

Our proposed models are compared against a number of state-of-the-art counterparts.
On the three citation datasets (i.e., Cora, Citeseer, Pubmed), we compared our models with: (1) GCN (Kipf & Welling, 2016); (2) GAT (Veličković et al., 2018); (3) nonparametric Bayesian GCNN (BGCNN) (Pal et al., 2019); (4) Bayesian GCN (Zhang et al., 2019); (5) MC-dropout for Bayesian GNNs (GCN-Drop, GAT-Drop) (Ryu et al., 2019); (6) skip-gram based graph embeddings (DeepWalk) (Perozzi et al., 2014); (7) the iterative classification algorithm (ICA) (Lu & Getoor, 2003); and (8) Planetoid (Yang et al., 2016). We selected these baselines following (Veličković et al., 2018) for a fair comparison with the latest comparable models. On Coauthor Physics, Amazon Computer, and Amazon Photo, we compared the performance of our models with that of GCN and GAT (Shchur et al., 2018); we could not report S-GAT (S-BGAT) results due to memory limitations. More details of the model setup are given in Appendix A." }, { "heading": "4.3 EXPERIMENTAL RESULTS & ANALYSIS", "text": "In Table 2, we summarize the mean percentage classification accuracy, with standard deviation, of each model in this experiment. The results show that our model achieves the best accuracy on all datasets except Citeseer. Specifically, our proposed S-BGCN-T improves over GCN by margins of 0.7%, 1.2%, 0.2%, 0.2%, 4.5%, and 0.7% on Cora, Citeseer, Pubmed, Coauthor Physics, Amazon Computer, and Amazon Photo, respectively. In addition, our proposed S-BGAT-T model improves over GAT by 0.8% on both Cora and Citeseer. Notice that S-BGNN-T even outperforms S-BGNN, particularly on Cora and Citeseer (i.e., a 1% - 1.3% increase). These results show that the teacher network can prevent overfitting, leading to a further improvement in classification prediction." }, { "heading": "5 UNCERTAINTY EXPERIMENT AND ANALYSIS", "text": "In Section 4, we showed that our S-BGNN-T improves prediction performance. In this section, we study the effectiveness of prediction based on different types of uncertainty. We evaluate uncertainty-aware node classification and out-of-distribution (OOD) detection in terms of the area under the ROC (AUROC) and Precision-Recall (AUPR) curves, following (Hendrycks & Gimpel, 2016), on the three citation network datasets. For OOD detection, we randomly selected 1-3 categories as OOD categories and trained the models only on training nodes of the remaining categories. Due to the space constraint, the description of the datasets and the experimental setup for OOD detection are summarized in the Appendix. To evaluate our multiple uncertainty types, we compare our model with two baseline models: (1) GCN (Kipf & Welling, 2016), which measures uncertainty by the entropy of the softmax probabilities; and (2) GCN-Drop, which adapts Monte Carlo Dropout (Gal & Ghahramani, 2016) into the GCN model (Ryu et al., 2019) to estimate aleatoric and epistemic uncertainty. For OOD detection, we also consider Distributional uncertainty (Malinin & Gales, 2018)." }, { "heading": "5.1 QUALITY OF UNCERTAINTY METRICS", "text": "In Figure 2, we used S-BGCN-T-K to predict node classes when test nodes are selected in order of lowest uncertainty of a given type. First, all uncertainty types show decreasing precision as recall increases. This implies that all uncertainty types are, to some extent, indicators of prediction accuracy, because low uncertainty is associated with higher prediction accuracy (a sketch of this uncertainty-ranked evaluation is given below).
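The curves in Figure 2 can be traced by ranking test nodes from lowest to highest uncertainty and sweeping a cut-off. A minimal sketch of that evaluation, assuming correctness of the classifier's prediction as the positive label (the function name and array layout are our own):

import numpy as np

def precision_recall_by_uncertainty(correct, u):
    # correct: boolean array, prediction == ground truth per test node
    # u: per-node uncertainty of one type (vacuity, dissonance, ...)
    order = np.argsort(u)                      # lowest uncertainty first
    hits = np.cumsum(correct[order])           # correct predictions kept so far
    precision = hits / np.arange(1, len(u) + 1)
    recall = hits / max(correct.sum(), 1)
    return precision, recall

If low uncertainty tracks correctness, precision starts near 1 and decays as more (uncertain) nodes are admitted, which is exactly the shape reported for all uncertainty types.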
In Figure 2, precision is nearly 100% when recall is close to zero on Cora, and over 95% on Pubmed. Further, the outperformance of Dissonance uncertainty is clear among all types. This indicates that low uncertainty with little conflicting evidence is the most critical factor for enhancing classification accuracy, compared to a low extent of the other uncertainty types. In addition, although epistemic uncertainty was very low in magnitude, it performs the worst among all types, indicating that epistemic uncertainty is not necessarily helpful for prediction accuracy in semi-supervised node classification. Lastly, we found that vacuity is not as important as dissonance: accurate prediction does not necessarily depend on a large amount of information, but is more affected by less conflicting (or more agreeing) evidence supporting a single class. In Table 3, although the S-BGCN-T models with the five different uncertainty types do not always outperform the existing models (i.e., GCN Entropy and the GCN-Drop variants), the outperformance of Dissonance is fairly impressive. This result confirms that low dissonance is the key to maximizing node classification accuracy. Comparing S-BGCN-T and S-BGCN-T-K, we found that GKDE improves performance only slightly here. To better understand the different uncertainty types, we used t-SNE (Maaten & Hinton, 2008) to visualize the feature representations computed by a pre-trained S-BGCN-T-K model's first hidden layer on the Cora dataset in Figure 3." }, { "heading": "5.2 OUT-OF-DISTRIBUTION DETECTION", "text": "In this section, we discuss how different uncertainty types affect performance in out-of-distribution (OOD) detection. In Table 4, we considered 6 uncertainty measures across 3 models for our performance comparison. Note that Distributional uncertainty is the most recent approach showing the best performance in OOD detection. Across the three citation network datasets, S-BGCN-T-K Vacuity in particular showed significantly better performance, strikingly outperforming Distributional uncertainty. Notice that S-BGCN-T-K outperforms S-BGCN-T (i.e., a 4% - 7% increase), especially in vacuity. These results show that GKDE improves the Dirichlet distribution estimation, leading to better uncertainty estimates." }, { "heading": "6 CONCLUSION", "text": "In this work, we proposed a Subjective Bayesian GNN framework for uncertainty-aware semi-supervised node classification and out-of-distribution (OOD) detection. Our proposed framework provides an effective, efficient way to predict node classes and detect OOD nodes while considering multiple uncertainty types. We leveraged the estimation of various types of uncertainty from both the DL and evidence/belief theory domains. In addition, we leveraged the teacher network to help refine the classification probabilities and GKDE to accurately estimate the Dirichlet distribution. The key findings from this study include: • For overall classification prediction, our proposed S-BGNN-T outperformed the competitive
baselines on most datasets. The key component for improving accuracy is the teacher network. • For node classification prediction under various uncertainty types, we found that dissonance (i.e., uncertainty derived from conflicting evidence) played a significant role in improving classification prediction accuracy.
• For OOD detection, vacuity uncertainty played a key role when S-BGCN-T-K was used to detect OOD nodes. This means that less information and/or more randomness (i.e., less predictability) enables more effective OOD detection. More impressively, GKDE indeed helps estimate the Dirichlet distribution accurately, thereby enhancing the vacuity-based performance; vacuity also outperformed the most recent counterpart, Distributional uncertainty." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 SOURCE CODE", "text": "For review purposes, the source code and datasets are accessible at https://www.dropbox.com/sh/cs5gs2i1umdx4b6/AAC-r_EYRw9lryk95giqW8-Fa?dl=0" }, { "heading": "DESCRIPTION OF DATASETS", "text": "Cora, Citeseer, and Pubmed (Sen et al., 2008): These are citation network datasets, where a node represents a document and a directed edge is a citation link, i.e., an edge exists when document A cites document B. Each node's feature vector contains a bag-of-words representation of the document. For simplicity, we do not distinguish the direction of citation links; we treat them as undirected edges and construct a binary, symmetric adjacency matrix A. Each node is labeled with the class to which it belongs.

Coauthor Physics, Amazon Computers, and Amazon Photo (Shchur et al., 2018): Coauthor Physics is a co-authorship graph based on the Microsoft Academic Graph from the KDD Cup 2016 Challenge1. In this graph, a node is an author and an edge exists when two authors co-author a paper. A node's features represent the keywords of its papers, and the node's class label indicates its most active field of study. Amazon Computers and Amazon Photo are segments of an Amazon co-purchase graph (McAuley et al., 2015), where a node is a good (i.e., a product) and an edge exists when two goods are frequently bought together. A node's features are a bag-of-words representation of product reviews, and the node's class label is the product category.

In the semi-supervised node classification task, the training and test nodes are selected following (Sen et al., 2008) for the citation network datasets and are randomly selected for Coauthor Physics, Amazon Computers, and Amazon Photo." }, { "heading": "A.2 EXPERIMENTAL SETUP FOR OUT-OF-DISTRIBUTION (OOD) DETECTION", "text": "For OOD detection, we summarize the experimental setup for the three citation network datasets (i.e., Cora, Citeseer, and Pubmed) in Table 5. In this setting, we still consider the semi-supervised node classification task, but only a subset of the node categories is used for training. Hence, the model outputs only the in-distribution categories (since the OOD categories are unknown at training time). For example, on the Cora dataset, we train the model with 80 nodes (20 nodes for each of the 4 in-distribution categories) and predict over those 4 categories. The positive ratio is the fraction of out-of-distribution nodes among all test nodes." }, { "heading": "A.3 CALCULATION OF AUPR AND AUROC", "text": "For the calculation of precision, recall, TPR, and FPR, we select a certain φ % of the test nodes and label them as positive (correct) based on the extent of uncertainty: the lowest uncertainty for classification prediction and the highest uncertainty for OOD detection. The remaining test nodes (i.e., 100 − φ %) are labeled as negative. Each test node's prediction is then checked against its ground truth to derive AUPR and AUROC; a minimal sketch for the OOD case is given below.
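For OOD detection, these quantities reduce to standard score-based metrics: treat OOD membership as the positive label and an uncertainty value as the score. A sketch using scikit-learn (average precision as the usual stand-in for AUPR; the function and variable names are our own):

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def ood_metrics(is_ood, u):
    # is_ood: 1 for out-of-distribution test nodes, 0 otherwise
    # u: per-node uncertainty (e.g., vacuity); higher should indicate OOD
    return roc_auc_score(is_ood, u), average_precision_score(is_ood, u)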
1KDD Cup 2016 Dataset: Online Available at https://kddcup2016.azurewebsites.net/" }, { "heading": "TIME COMPLEXITY ANALYSIS", "text": "S-BGCN has a time complexity similar to that of GCN, while S-BGCN-T has double the complexity of GCN. For a given network where |V| is the number of nodes, |E| is the number of edges, C is the number of dimensions of the input feature vector of every node, and F is the number of features of the output layer, the Big-O complexities of the compared schemes are: O(|E|CF) for GCN, O(|E|CF) for S-BGCN, O(2|E|CF) for S-BGCN-T and S-BGCN-T-K, O(|V|CF + |E|F) for GAT, and O(2|V|CF + 2|E|F) for S-BGAT-T and S-BGAT-T-K." }, { "heading": "A.4 MODEL SETUPS FOR SEMI-SUPERVISED NODE CLASSIFICATION", "text": "Our models are initialized using Glorot initialization (Glorot & Bengio, 2010) and trained to minimize the loss using the Adam SGD optimizer (Kingma & Ba, 2014). As our proposed models (i.e., S-BGCN-T-K, S-BGAT-T-K) need a discriminative model to refine inference, we use standard GCN and GAT models as the teacher networks for S-BGCN-T-K and S-BGAT-T-K, respectively. For the S-BGCN-T-K model, we use the early-stopping strategy of (Shchur et al., 2018) on the Coauthor Physics, Amazon Computer, and Amazon Photo datasets, while no early stopping is used on the citation datasets (i.e., Cora, Citeseer, and Pubmed). We set the bandwidth σ = 1 for all datasets in GKDE and set the trade-off parameter λ = min(1, t/200), where t is the index of the current training epoch; the other hyperparameter configurations are summarized in Table 6 (a sketch of the annealed objective used here precedes the algorithm listings below). The S-BGAT-T-K model has two dropout probabilities, a dropout on features and a dropout on attention coefficients, as shown in Table 7. We changed the dropout on attention coefficients to 0.4 at the test stage and set the trade-off parameter λ = min(1, t/50), using the same early-stopping strategy as (Veličković et al., 2018). Note that, due to lack of memory (we used one Titan X GPU with 12 GB of memory), we could not obtain results for GAT (and S-BGAT) on the Coauthor Physics, Amazon Computer, and Amazon Photo datasets, which are very dense.

For semi-supervised node classification, we use 50 random weight initializations for our models on the citation network datasets. For the Coauthor Physics, Amazon Computer, and Amazon Photo datasets, we report results over 10 random train/validation/test splits. For both the effect of uncertainty on classification accuracy and OOD detection, we report AUPR and AUROC (in percent) averaged over 50 runs of 1000 randomly chosen test nodes from the full test set (excluding the training and validation sets) for all models on the citation datasets. For the S-BGCN-T-K model in these tasks, we use the same hyperparameter configurations as in Table 6, except that S-BGCN-T-K Epistemic uses 20,000 epochs to obtain its best result. For the baseline models, the GCN-Drop models use the same hyperparameters as in Table 6 to achieve their best performance, also with 20,000 training epochs for GCN-Drop Epistemic. GCN Entropy uses the hyperparameter configurations of (Kipf & Welling, 2016)."
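Before the algorithm listings, a small sketch of how the pieces of objective (11) fit together with the λ = min(1, t/T) annealing described above; the closed-form Dirichlet moments E[p_j] = α_j/S and Var(p_j) = α_j(S − α_j)/(S²(S + 1)) used for Eq. (9) are standard, but the function names and scalar loss inputs are illustrative only.

import numpy as np

def bayes_risk(y_onehot, alpha):
    # Eq. (9) in closed form for p ~ Dir(alpha); rows are labeled nodes.
    S = alpha.sum(axis=1, keepdims=True)
    p_mean = alpha / S
    p_var = alpha * (S - alpha) / (S**2 * (S + 1))
    return ((y_onehot - p_mean)**2 + p_var).sum()

def tradeoff(t, T=200):
    # lambda = min(1, t/T); T = 200 on the citation datasets, 50 for S-BGAT-T-K
    return min(1.0, t / T)

def total_loss(l_gamma, l_beta, kl_teacher, kl_gkde, t, T=200):
    # Eq. (11): L(gamma) + L(beta)
    #           + lambda * (KL[student || teacher] + KL[Dir(alpha) || Dir(alpha_hat)])
    return l_gamma + l_beta + tradeoff(t, T) * (kl_teacher + kl_gkde)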
}, { "heading": "A.5 ALGORITHMS FOR OUR MODEL", "text": "Algorithm 1: S-BGNN-T-K with jointly trained teacher network Input: G = (V, E, r) and y_L Output: p_{V\\L}, u_{V\\L}

1 ℓ = 0; 2 Set hyper-parameters; 3 Initialize the parameters γ, β; 4 Calculate the prior Dirichlet distribution Dir(α̂); 5 repeat 6 Forward pass to compute α, Prob(p_i|r;G), and Prob(y_i|r;β) for i ∈ V; 7 Compute the joint probabilities Prob(y|r;G) and Prob(y|r;β); 8 Backward pass via the chain rule to calculate the sub-gradient: g^(ℓ) = ∇_Θ L(Θ) 9 Update the parameters with step size η via Θ^(ℓ+1) = Θ^(ℓ) − η · g^(ℓ)
10 ℓ = ℓ + 1; 11 until convergence 12 Calculate p_{V\\L}, u_{V\\L} 13 return p_{V\\L}, u_{V\\L}

Algorithm 2: S-BGNN-T-K with pre-trained teacher network Input: G = (V, E, r) and y_L Output: p_{V\\L}, u_{V\\L}

1 ℓ = 0; 2 Set hyper-parameters; 3 Initialize the parameters γ, β; 4 Calculate the prior Dirichlet distribution Dir(α̂); 5 Pre-train the teacher network to obtain Prob(y|r;β) 6 repeat 7 Forward pass to compute α and Prob(p_i|r;G) for i ∈ V; 8 Compute the joint probability Prob(y|r;G); 9 Backward pass via the chain rule to calculate the sub-gradient: g^(ℓ) = ∇_Θ L(Θ)
10 Update the parameters with step size η via Θ^(ℓ+1) = Θ^(ℓ) − η · g^(ℓ) 11 ℓ = ℓ + 1; 12 until convergence 13 Calculate p_{V\\L}, u_{V\\L} 14 return p_{V\\L}, u_{V\\L}" }, { "heading": "B ADDITIONAL EXPERIMENT RESULTS", "text": "Further experiments were run in addition to the uncertainty analysis in Section 5. First, we present an ablation experiment for each component we proposed. Second, we show additional uncertainty visualization results for node classification on the Citeseer dataset. To assess the quality of each uncertainty type clearly, we show the AUROC and AUPR curves for all models and uncertainty types." }, { "heading": "B.1 ABLATION EXPERIMENTS", "text": "We conducted an additional experiment to verify the benefit of the teacher network. We also anticipated that the graph kernel prior would improve the estimation accuracy of the Dirichlet distribution; since the space constraint prevented showing classification results without the graph kernel prior in the main text, we provide here a detailed ablation study that demonstrates the contribution of each key technical component, including the teacher network, Graph-based Kernel Dirichlet distribution Estimation (GKDE), and the subjective Bayesian framework. The key findings from this experiment are: (1) the teacher network can further improve node classification accuracy (i.e., a 0.2% - 1.5% increase, as shown in Table 2); and (2) GKDE (the graph kernel prior) enhances OOD detection through its uncertainty estimates (i.e., a 4% - 7% increase, as shown in Table 9)." }, { "heading": "B.2 GRAPH EMBEDDING REPRESENTATIONS OF DIFFERENT UNCERTAINTY TYPES", "text": "To better understand the different uncertainty types, we used t-SNE (t-Distributed Stochastic Neighbor Embedding (Maaten & Hinton, 2008)) to visualize the feature representations computed by a pre-trained S-BGCN-T model's first hidden layer on the Citeseer dataset.

Six Classes on the Citeseer Dataset: In Figure 4 (a), a node's color denotes its class on the Citeseer dataset, with the 6 classes shown in different colors. Figure 4 (b) shows our prediction result.

In Figures 4 (c)-(f), the extent of uncertainty is presented, where blue refers to the lowest uncertainty (i.e., minimum uncertainty) and red indicates the highest uncertainty (i.e., maximum uncertainty), according to the color bar.
To examine how the extent of uncertainty differs between training and test nodes, we draw training nodes as larger circles than test nodes. Overall, we notice that most training nodes (shown as larger circles) have low uncertainty (i.e., blue), which is reasonable because the training nodes have already been observed. We now discuss the extent of uncertainty under each uncertainty type.

Vacuity: In Figure 4 (c), although most training nodes show low uncertainty, the majority of test nodes in the middle cluster show high uncertainty, appearing in red.

Dissonance: In Figure 4 (d), as with vacuity, training nodes have low uncertainty; but unlike vacuity, test nodes are much less uncertain. Recall that dissonance represents the degree of conflicting evidence (i.e., the discrepancy between class probabilities). On this dataset, we observe a fairly low level of dissonance and a clear outperformance of Dissonance in node classification prediction.

Aleatoric uncertainty: In Figure 4 (e), many nodes show high uncertainty (larger than 0.5), except for a small number of training nodes with low uncertainty. High aleatoric uncertainty has a positive effect, yielding high performance in OOD detection.

Epistemic uncertainty: In Figure 4 (f), most nodes show very low epistemic uncertainty, because uncertainty in the model parameters vanishes as the model is trained well. Therefore, the non-distinctive, uniformly low uncertainty of most nodes does not help select good test nodes for improving node classification performance." }, { "heading": "B.3 PR AND ROC CURVES", "text": "AUPRC for OOD Detection: Figure 6 shows the AUPRC for OOD detection when S-BGCN-T-K is used to detect OOD nodes, where test nodes are selected in order of high uncertainty of a given type, such as vacuity, dissonance, aleatoric, epistemic, or entropy (total uncertainty). To compare the proposed models with a baseline, we also include S-BGCN-T-K with test nodes selected randomly (i.e., Random).

As expected, for the Random baseline, precision is insensitive to increasing recall, while for S-BGCN-T-K (with test nodes selected by high uncertainty) precision decreases as recall increases. Although most S-BGCN-T-K variants show precision that is sensitive to increasing recall (i.e., evidence that uncertainty is an indicator for OOD detection), S-BGCN-T-K Epistemic performs even worse than the baseline (i.e., S-BGCN-T-K Random). This is because epistemic uncertainty cannot distinguish a flat Dirichlet distribution (α = (1, . . . , 1)) from a sharp one (α = (10, . . . , 10)), which prevents an effective selection of test nodes for OOD detection. In addition, unlike AUPR in node classification prediction, where S-BGCN-T-K Dissonance performed best (see Figure 5), here S-BGCN-T-K Dissonance shows the second-worst performance among the proposed variants. This means that less conflicting information does not help OOD detection. On the other hand, Vacuity performs best overall, while S-BGCN-T-K Entropy also performs fairly well as the third best. From this finding, we can claim that more randomness (high aleatoric uncertainty) and less information (high vacuity) help boost OOD detection accuracy.
Although the uncertainty levels observed for aleatoric uncertainty and entropy are quite similar, their performance in OOD detection is not necessarily similar, as shown in Figures 6 (b) and (c) on Citeseer and Pubmed. The reason is that S-BGCN-T-K Aleatoric assigns more distinctive uncertainty levels to test nodes, while S-BGCN-T-K Entropy does not: entropy combines aleatoric and epistemic uncertainty, and since epistemic uncertainty is mostly very low, the combination ultimately yields poor distinctions between nodes at different uncertainty levels.

AUROC for OOD Detection: First, we investigated the performance of our proposed S-BGCN-T-K models when test nodes are selected based on seven different criteria (i.e., 6 uncertainty measures and random). As with AUPR in Figure 5, we consider an S-BGCN-T-K baseline that selects test nodes randomly, while the five uncertainty types select test nodes in order of high uncertainty. For AUROC in Figure 7, we observed much better performance for most S-BGCN-T-K models with all uncertainty types except epistemic uncertainty. Although epistemic uncertainty is known to be effective for improving OOD detection in computer vision applications (Kendall & Gal, 2017), our results show fairly poor performance compared with the other uncertainty types. This is because our experiments use a very small number of training nodes (i.e., 3% on Cora, 2% on Citeseer, 0.2% on Pubmed), a regime in which achieving high performance with epistemic uncertainty is particularly challenging. Recall that we used 200 training epochs for all models except S-BGCN-T-K Epistemic, which was trained for 20,000 epochs. In this experiment, S-BGCN-T-K Vacuity again performed best, although S-BGCN-T-K Dissonance, S-BGCN-T-K Aleatoric, and S-BGCN-T-K Entropy perform comparably; on the Citeseer and Pubmed datasets we also observed relatively low performance for S-BGCN-T-K Dissonance. This finding is well aligned with what we observed in Table 4 (in the paper): S-BGCN-T-K Vacuity performs the best on all three datasets and clearly outperforms S-BGCN-T-K Distributional in OOD detection." }, { "heading": "B.4 ANALYSIS OF EPISTEMIC UNCERTAINTY IN OOD DETECTION", "text": "In OOD detection, epistemic uncertainty performed the worst because it cannot distinguish a flat Dirichlet distribution (α = (1, . . . , 1)) from a sharp one (α = (10, . . . , 10)), resulting in poor OOD detection performance. Unlike AUPR in node classification prediction, where S-BGCN-T-K Dissonance performed best (see Figure 2), S-BGCN-T-K Dissonance showed the second-worst performance among the proposed S-BGCN-T-K variants. This implies that less conflicting belief mass does not help OOD detection.

Although epistemic uncertainty is known to be effective for improving OOD detection in computer vision applications (Kendall & Gal, 2017), our results showed fairly poor performance compared with the other uncertainty types, because our experiments use a very small number of training nodes (i.e., 3% on Cora, 2% on Citeseer, 0.2% on Pubmed), which makes high performance with epistemic uncertainty particularly hard to achieve." }, { "heading": "C DERIVATIONS FOR UNCERTAINTY MEASURES AND KL DIVERGENCE", "text": "This appendix provides the derivations and shows how to calculate the uncertainty measures discussed in Section 3 for the Bayesian GNN.
Additionally, it describes how to calculate the joint probability, the Dirichlet parameters, and the KL-divergence between Prob(y|r;β) and Prob(y|r;γ,G)." }, { "heading": "C.1 UNCERTAINTY MEASURES", "text": "Vacuity uncertainty of the Bayesian GNN for node i:

Vacuity[p_i] = E_{Prob(θ|G)}[v_i] = E_{Prob(θ|G)}[ K / Σ_{k=1}^K α_{ik} ] ≈ E_{q(θ)}[ K / Σ_{k=1}^K α_{ik} ] ≈ (1/M) Σ_{m=1}^M K / Σ_{k=1}^K α_{ik}^(m),  α^(m) = f(r, θ^(m)),  θ^(m) ∼ q(θ)

Dissonance uncertainty of the Bayesian GNN for node i:

Disso.[p_i] = E_{Prob(θ|G)}[ ω(b_i) ] ≈ E_{q(θ)}[ ω(b_i) ] ≈ (1/M) Σ_{m=1}^M ω(b_i^(m)),  θ^(m) ∼ q(θ)

with

ω(b_i) = Σ_{k=1}^K b_{ik} · ( Σ_{j=1,j≠k}^K b_{ij} Bal(b_{ij}, b_{ik}) / Σ_{j=1,j≠k}^K b_{ij} ),

where the relative mass balance between a pair of belief masses b_{ij} and b_{ik} is expressed by Bal(b_{ij}, b_{ik}) = 1 − |b_{ij} − b_{ik}|/(b_{ij} + b_{ik}).

Aleatoric uncertainty of the Bayesian GNN for node i, following (Malinin & Gales, 2018):

Aleatoric[p_i] = E_{Prob(θ|G)}[ H(y_i|r;θ) ] ≈ E_{q(θ)}[ H(y_i|r;θ) ] ≈ (1/M) Σ_{m=1}^M H[ y_i|r;θ^(m) ] = −(1/M) Σ_{m=1}^M Σ_{j=1}^K Prob(y_i = j|r;θ^(m)) log Prob(y_i = j|r;θ^(m)),  θ^(m) ∼ q(θ)

(note the minus sign of the Shannon entropy).

Epistemic uncertainty of the Bayesian GNN for node i, following (Gal, 2016):

Epistemic[p_i] = H[ E_{Prob(θ|G)}[Prob(y_i|r;θ)] ] − E_{Prob(θ|G)}[ H(y_i|r;θ) ] ≈ H[ E_{q(θ)}[Prob(y_i|r;θ)] ] − E_{q(θ)}[ H(y_i|r;θ) ] ≈ H[ (1/M) Σ_{m=1}^M Prob(y_i|r;θ^(m)) ] − (1/M) Σ_{m=1}^M H[ y_i|r;θ^(m) ],  θ^(m) ∼ q(θ)" }, { "heading": "C.2 JOINT PROBABILITY", "text": "At the test stage, we infer the joint probability by:

Prob(y|r;G) = ∫∫ Prob(y|p) Prob(p|r;θ) Prob(θ|G) dp dθ
≈ ∫∫ Prob(y|p) Prob(p|r;θ) q(θ) dp dθ
≈ (1/M) Σ_{m=1}^M ∫ Prob(y|p) Prob(p|r;θ^(m)) dp,  θ^(m) ∼ q(θ)
= (1/M) Σ_{m=1}^M ∫ ∏_{i=1}^N Prob(y_i|p_i) Prob(p_i|r;θ^(m)) dp,  θ^(m) ∼ q(θ)
= (1/M) Σ_{m=1}^M ∏_{i=1}^N ∫ Prob(y_i|p_i) Dir(p_i|α_i^(m)) dp_i,  α^(m) = f(r, θ^(m)),  θ^(m) ∼ q(θ),

where the integral factorizes over nodes because both Prob(y|p) and Prob(p|r;θ^(m)) factorize over nodes. The posterior over a class label is then given by the mean of the Dirichlet:

Prob(y_i = p|θ^(m)) = ∫ Prob(y_i = p|p_i) Prob(p_i|r;θ^(m)) dp_i = α_{ip}^(m) / Σ_{k=1}^K α_{ik}^(m)

The probabilistic form for a specific node i follows by marginalization:

Prob(y_i|r;G) = Σ_{y\y_i} Prob(y|r;G) ≈ (1/M) Σ_{m=1}^M [ Σ_{y\y_i} ∏_{j=1,j≠i}^N Prob(y_j|r;θ^(m)) ] Prob(y_i|r;θ^(m)) = (1/M) Σ_{m=1}^M ∫ Prob(y_i|p_i) Prob(p_i|r;θ^(m)) dp_i,  θ^(m) ∼ q(θ),

since the bracketed sum over the remaining labels equals one. Specifically, for the probability of label p:

Prob(y_i = p|r;G) ≈ (1/M) Σ_{m=1}^M α_{ip}^(m) / Σ_{k=1}^K α_{ik}^(m),  α^(m) = f(r, θ^(m)),  θ^(m) ∼ q(θ)" }, { "heading": "C.3 KL-DIVERGENCE", "text": "The KL-divergence between Prob(y|r;G) and Prob(y|r;β) is:

KL[Prob(y|r;G) ‖ Prob(y|r;β)] = E_{Prob(y|r;G)}[ log ( Prob(y|r;G) / Prob(y|r;β) ) ] ≈ E_{Prob(y|r;G)}[ log ( ∏_{i=1}^N Prob(y_i|r;G) / ∏_{i=1}^N Prob(y_i|r;β) ) ] = Σ_{i=1}^N E_{Prob(y|r;G)}[ log ( Prob(y_i|r;G) / Prob(y_i|r;β) ) ] = Σ_{i=1}^N Σ_{j=1}^K Prob(y_i = j|r;G) log ( Prob(y_i = j|r;G) / Prob(y_i = j|r;β) )

The KL divergence between two Dirichlet distributions Dir(α) and Dir(α̂) can be obtained in closed form as follows:

KL[Dir(α)‖Dir(α̂)] = ln Γ(S) − ln Γ(Ŝ) + Σ_{c=1}^K ( ln Γ(α̂_c) − ln Γ(α_c) ) + Σ_{c=1}^K (α_c − α̂_c)( ψ(α_c) − ψ(S) ),

where S = Σ_{c=1}^K α_c and Ŝ = Σ_{c=1}^K α̂_c." } ]
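These quantities are straightforward to check numerically. A minimal sketch, assuming an (M, K) array of Monte Carlo class probabilities for one node and using SciPy's gammaln/digamma for the closed-form Dirichlet KL; the function names are our own:

import numpy as np
from scipy.special import gammaln, digamma

def entropy(p, axis=-1):
    # Shannon entropy H(p) = -sum_j p_j log p_j (small epsilon for stability)
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

def aleatoric_epistemic(probs_mc):
    # probs_mc: (M, K) class probabilities from M stochastic forward passes.
    # Epistemic = H(E[p]) - E[H(p)], i.e., total minus aleatoric (C.1).
    aleatoric = entropy(probs_mc).mean()
    total = entropy(probs_mc.mean(axis=0))
    return aleatoric, total - aleatoric

def dirichlet_kl(alpha, alpha_hat):
    # Closed-form KL[Dir(alpha) || Dir(alpha_hat)] from C.3.
    S, S_hat = alpha.sum(), alpha_hat.sum()
    return (gammaln(S) - gammaln(S_hat)
            + np.sum(gammaln(alpha_hat) - gammaln(alpha))
            + np.sum((alpha - alpha_hat) * (digamma(alpha) - digamma(S))))

# Sanity checks: identical Dirichlets give KL = 0; identical MC samples give
# zero epistemic uncertainty.
a = np.array([2.0, 3.0, 4.0])
print(dirichlet_kl(a, a))                                        # 0.0
print(aleatoric_epistemic(np.tile([0.2, 0.3, 0.5], (8, 1)))[1])  # ~0.0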
2019
null